2026-03-17 00:00:06.546472 | Job console starting
2026-03-17 00:00:06.595283 | Updating git repos
2026-03-17 00:00:06.706483 | Cloning repos into workspace
2026-03-17 00:00:07.058973 | Restoring repo states
2026-03-17 00:00:07.089373 | Merging changes
2026-03-17 00:00:07.089403 | Checking out repos
2026-03-17 00:00:07.500283 | Preparing playbooks
2026-03-17 00:00:08.783004 | Running Ansible setup
2026-03-17 00:00:15.914899 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-17 00:00:18.819535 |
2026-03-17 00:00:18.819736 | PLAY [Base pre]
2026-03-17 00:00:18.914415 |
2026-03-17 00:00:18.914599 | TASK [Setup log path fact]
2026-03-17 00:00:18.982656 | orchestrator | ok
2026-03-17 00:00:19.017692 |
2026-03-17 00:00:19.017868 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-17 00:00:19.098984 | orchestrator | ok
2026-03-17 00:00:19.144682 |
2026-03-17 00:00:19.144831 | TASK [emit-job-header : Print job information]
2026-03-17 00:00:19.249011 | # Job Information
2026-03-17 00:00:19.249242 | Ansible Version: 2.16.14
2026-03-17 00:00:19.249284 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-17 00:00:19.249318 | Pipeline: periodic-midnight
2026-03-17 00:00:19.249341 | Executor: 521e9411259a
2026-03-17 00:00:19.249362 | Triggered by: https://github.com/osism/testbed
2026-03-17 00:00:19.249384 | Event ID: 7f7c98d488164e9a90f8fe7794c9d4c5
2026-03-17 00:00:19.257390 |
2026-03-17 00:00:19.257527 | LOOP [emit-job-header : Print node information]
2026-03-17 00:00:19.731704 | orchestrator | ok:
2026-03-17 00:00:19.732007 | orchestrator | # Node Information
2026-03-17 00:00:19.732047 | orchestrator | Inventory Hostname: orchestrator
2026-03-17 00:00:19.732073 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-17 00:00:19.732096 | orchestrator | Username: zuul-testbed03
2026-03-17 00:00:19.732118 | orchestrator | Distro: Debian 12.13
2026-03-17 00:00:19.732141 | orchestrator | Provider: static-testbed
2026-03-17 00:00:19.732163 | orchestrator | Region:
2026-03-17 00:00:19.732185 | orchestrator | Label: testbed-orchestrator
2026-03-17 00:00:19.732238 | orchestrator | Product Name: OpenStack Nova
2026-03-17 00:00:19.732261 | orchestrator | Interface IP: 81.163.193.140
2026-03-17 00:00:19.761983 |
2026-03-17 00:00:19.762128 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-17 00:00:21.395899 | orchestrator -> localhost | changed
2026-03-17 00:00:21.404998 |
2026-03-17 00:00:21.405144 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-17 00:00:25.284012 | orchestrator -> localhost | changed
2026-03-17 00:00:25.298405 |
2026-03-17 00:00:25.298498 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-17 00:00:26.100956 | orchestrator -> localhost | ok
2026-03-17 00:00:26.106568 |
2026-03-17 00:00:26.106670 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-17 00:00:26.144497 | orchestrator | ok
2026-03-17 00:00:26.169481 | orchestrator | included: /var/lib/zuul/builds/3e0e57a4161f4df9aa9619c57544ea04/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-17 00:00:26.195444 |
2026-03-17 00:00:26.195552 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-17 00:00:33.551615 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-17 00:00:33.552725 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/3e0e57a4161f4df9aa9619c57544ea04/work/3e0e57a4161f4df9aa9619c57544ea04_id_rsa
2026-03-17 00:00:33.552783 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/3e0e57a4161f4df9aa9619c57544ea04/work/3e0e57a4161f4df9aa9619c57544ea04_id_rsa.pub
2026-03-17 00:00:33.552807 | orchestrator -> localhost | The key fingerprint is:
2026-03-17 00:00:33.552830 | orchestrator -> localhost | SHA256:DGbiZtmpfP4/kszRQlEE6sf+hl7L07GpmQYgqctEPKQ zuul-build-sshkey
2026-03-17 00:00:33.552849 | orchestrator -> localhost | The key's randomart image is:
2026-03-17 00:00:33.552875 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-17 00:00:33.552894 | orchestrator -> localhost | | .+o |
2026-03-17 00:00:33.552912 | orchestrator -> localhost | | . .. |
2026-03-17 00:00:33.552929 | orchestrator -> localhost | | + ..= . |
2026-03-17 00:00:33.552946 | orchestrator -> localhost | |E +.oB.=. |
2026-03-17 00:00:33.552962 | orchestrator -> localhost | | . o=.+oS. |
2026-03-17 00:00:33.552982 | orchestrator -> localhost | | o+ . o+ . . |
2026-03-17 00:00:33.552999 | orchestrator -> localhost | | o .o .o.*.. + |
2026-03-17 00:00:33.553015 | orchestrator -> localhost | | o o *+=++ |
2026-03-17 00:00:33.553032 | orchestrator -> localhost | | .oo=O+ |
2026-03-17 00:00:33.553049 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-17 00:00:33.553094 | orchestrator -> localhost | ok: Runtime: 0:00:06.160266
2026-03-17 00:00:33.558912 |
2026-03-17 00:00:33.558987 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-17 00:00:33.597392 | orchestrator | ok
2026-03-17 00:00:33.609507 | orchestrator | included: /var/lib/zuul/builds/3e0e57a4161f4df9aa9619c57544ea04/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-17 00:00:33.626825 |
2026-03-17 00:00:33.630812 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-17 00:00:33.662713 | orchestrator | skipping: Conditional result was False
2026-03-17 00:00:33.675923 |
2026-03-17 00:00:33.676015 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-17 00:00:34.579029 | orchestrator | changed
2026-03-17 00:00:34.584151 |
2026-03-17 00:00:34.584245 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-17 00:00:34.927220 | orchestrator | ok
2026-03-17 00:00:34.932315 |
2026-03-17 00:00:34.932397 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-17 00:00:35.441250 | orchestrator | ok
2026-03-17 00:00:35.449028 |
2026-03-17 00:00:35.449117 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-17 00:00:35.930476 | orchestrator | ok
2026-03-17 00:00:35.935534 |
2026-03-17 00:00:35.935614 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-17 00:00:36.001735 | orchestrator | skipping: Conditional result was False
2026-03-17 00:00:36.007897 |
2026-03-17 00:00:36.007989 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-17 00:00:37.271817 | orchestrator -> localhost | changed
2026-03-17 00:00:37.282725 |
2026-03-17 00:00:37.282819 | TASK [add-build-sshkey : Add back temp key]
2026-03-17 00:00:38.054066 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/3e0e57a4161f4df9aa9619c57544ea04/work/3e0e57a4161f4df9aa9619c57544ea04_id_rsa (zuul-build-sshkey)
2026-03-17 00:00:38.054268 | orchestrator -> localhost | ok: Runtime: 0:00:00.017514
2026-03-17 00:00:38.060162 |
2026-03-17 00:00:38.060262 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-17 00:00:38.637799 | orchestrator | ok
2026-03-17 00:00:38.645191 |
2026-03-17 00:00:38.645288 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-17 00:00:38.693822 | orchestrator | skipping: Conditional result was False
2026-03-17 00:00:38.830724 |
2026-03-17 00:00:38.830819 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-17 00:00:39.397694 | orchestrator | ok
2026-03-17 00:00:39.420621 |
2026-03-17 00:00:39.420722 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-17 00:00:39.488039 | orchestrator | ok
2026-03-17 00:00:39.494001 |
2026-03-17 00:00:39.494085 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-17 00:00:40.114723 | orchestrator -> localhost | ok
2026-03-17 00:00:40.120875 |
2026-03-17 00:00:40.120957 | TASK [validate-host : Collect information about the host]
2026-03-17 00:00:41.642639 | orchestrator | ok
2026-03-17 00:00:41.676851 |
2026-03-17 00:00:41.676956 | TASK [validate-host : Sanitize hostname]
2026-03-17 00:00:41.869781 | orchestrator | ok
2026-03-17 00:00:41.876782 |
2026-03-17 00:00:41.876871 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-17 00:00:42.963850 | orchestrator -> localhost | changed
2026-03-17 00:00:42.969120 |
2026-03-17 00:00:42.969216 | TASK [validate-host : Collect information about zuul worker]
2026-03-17 00:00:43.645622 | orchestrator | ok
2026-03-17 00:00:43.651877 |
2026-03-17 00:00:43.652247 | TASK [validate-host : Write out all zuul information for each host]
2026-03-17 00:00:45.114076 | orchestrator -> localhost | changed
2026-03-17 00:00:45.122623 |
2026-03-17 00:00:45.122710 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-17 00:00:45.414712 | orchestrator | ok
2026-03-17 00:00:45.419949 |
2026-03-17 00:00:45.420028 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-17 00:02:16.922479 | orchestrator | changed:
2026-03-17 00:02:16.922715 | orchestrator | .d..t...... src/
2026-03-17 00:02:16.922753 | orchestrator | .d..t...... src/github.com/
2026-03-17 00:02:16.922779 | orchestrator | .d..t...... src/github.com/osism/
2026-03-17 00:02:16.922801 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-17 00:02:16.922822 | orchestrator | RedHat.yml
2026-03-17 00:02:16.965348 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-17 00:02:16.965366 | orchestrator | RedHat.yml
2026-03-17 00:02:16.965418 | orchestrator | = 1.53.0"...
2026-03-17 00:02:27.487551 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-17 00:02:27.504049 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-17 00:02:27.630355 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-17 00:02:28.399698 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-17 00:02:28.457270 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-17 00:02:29.225673 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-17 00:02:29.280662 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-17 00:02:29.722549 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-17 00:02:29.722673 | orchestrator |
2026-03-17 00:02:29.722686 | orchestrator | Providers are signed by their developers.
2026-03-17 00:02:29.722693 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-17 00:02:29.722699 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-17 00:02:29.722707 | orchestrator |
2026-03-17 00:02:29.722712 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-17 00:02:29.722718 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-17 00:02:29.722734 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-17 00:02:29.722739 | orchestrator | you run "tofu init" in the future.
2026-03-17 00:02:29.722916 | orchestrator |
2026-03-17 00:02:29.722930 | orchestrator | OpenTofu has been successfully initialized!
2026-03-17 00:02:29.722958 | orchestrator |
2026-03-17 00:02:29.722964 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-17 00:02:29.722969 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-17 00:02:29.722983 | orchestrator | should now work.
2026-03-17 00:02:29.722991 | orchestrator |
2026-03-17 00:02:29.722996 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-17 00:02:29.723000 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-17 00:02:29.723006 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-17 00:02:29.884528 | orchestrator | Created and switched to workspace "ci"!
2026-03-17 00:02:29.884604 | orchestrator |
2026-03-17 00:02:29.884615 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-17 00:02:29.884624 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-17 00:02:29.884631 | orchestrator | for this configuration.
2026-03-17 00:02:29.978254 | orchestrator | ci.auto.tfvars
2026-03-17 00:02:30.199458 | orchestrator | default_custom.tf
2026-03-17 00:02:32.061809 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-17 00:02:32.565154 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-17 00:02:32.809816 | orchestrator |
2026-03-17 00:02:32.809880 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-17 00:02:32.809888 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-17 00:02:32.809893 | orchestrator |   + create
2026-03-17 00:02:32.809898 | orchestrator |  <= read (data resources)
2026-03-17 00:02:32.809903 | orchestrator |
2026-03-17 00:02:32.809907 | orchestrator | OpenTofu will perform the following actions:
2026-03-17 00:02:32.809919 | orchestrator |
2026-03-17 00:02:32.809923 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-17 00:02:32.809927 | orchestrator |   # (config refers to values not yet known)
2026-03-17 00:02:32.809931 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-17 00:02:32.809936 | orchestrator |       + checksum = (known after apply)
2026-03-17 00:02:32.809940 | orchestrator |       + created_at = (known after apply)
2026-03-17 00:02:32.809944 | orchestrator |       + file = (known after apply)
2026-03-17 00:02:32.809948 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.809969 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.809973 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-17 00:02:32.809977 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-17 00:02:32.809981 | orchestrator |       + most_recent = true
2026-03-17 00:02:32.809985 | orchestrator |       + name = (known after apply)
2026-03-17 00:02:32.809989 | orchestrator |       + protected = (known after apply)
2026-03-17 00:02:32.809993 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.810000 | orchestrator |       + schema = (known after apply)
2026-03-17 00:02:32.810004 | orchestrator |       + size_bytes = (known after apply)
2026-03-17 00:02:32.810008 | orchestrator |       + tags = (known after apply)
2026-03-17 00:02:32.810011 | orchestrator |       + updated_at = (known after apply)
2026-03-17 00:02:32.810048 | orchestrator |     }
2026-03-17 00:02:32.810055 | orchestrator |
2026-03-17 00:02:32.810059 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-17 00:02:32.810063 | orchestrator |   # (config refers to values not yet known)
2026-03-17 00:02:32.810066 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-17 00:02:32.810070 | orchestrator |       + checksum = (known after apply)
2026-03-17 00:02:32.810074 | orchestrator |       + created_at = (known after apply)
2026-03-17 00:02:32.810078 | orchestrator |       + file = (known after apply)
2026-03-17 00:02:32.810082 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.810085 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.810089 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-17 00:02:32.810093 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-17 00:02:32.810096 | orchestrator |       + most_recent = true
2026-03-17 00:02:32.810100 | orchestrator |       + name = (known after apply)
2026-03-17 00:02:32.810104 | orchestrator |       + protected = (known after apply)
2026-03-17 00:02:32.810108 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.810111 | orchestrator |       + schema = (known after apply)
2026-03-17 00:02:32.810115 | orchestrator |       + size_bytes = (known after apply)
2026-03-17 00:02:32.810119 | orchestrator |       + tags = (known after apply)
2026-03-17 00:02:32.810122 | orchestrator |       + updated_at = (known after apply)
2026-03-17 00:02:32.810126 | orchestrator |     }
2026-03-17 00:02:32.810131 | orchestrator |
2026-03-17 00:02:32.810135 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-17 00:02:32.810139 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-17 00:02:32.810143 | orchestrator |       + content = (known after apply)
2026-03-17 00:02:32.810147 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-17 00:02:32.810151 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-17 00:02:32.810155 | orchestrator |       + content_md5 = (known after apply)
2026-03-17 00:02:32.810158 | orchestrator |       + content_sha1 = (known after apply)
2026-03-17 00:02:32.810162 | orchestrator |       + content_sha256 = (known after apply)
2026-03-17 00:02:32.810166 | orchestrator |       + content_sha512 = (known after apply)
2026-03-17 00:02:32.810169 | orchestrator |       + directory_permission = "0777"
2026-03-17 00:02:32.810173 | orchestrator |       + file_permission = "0644"
2026-03-17 00:02:32.810177 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-03-17 00:02:32.810180 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.810184 | orchestrator |     }
2026-03-17 00:02:32.811381 | orchestrator |
2026-03-17 00:02:32.811412 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-17 00:02:32.811419 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-17 00:02:32.811424 | orchestrator |       + content = (known after apply)
2026-03-17 00:02:32.811428 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-17 00:02:32.811432 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-17 00:02:32.811435 | orchestrator |       + content_md5 = (known after apply)
2026-03-17 00:02:32.811439 | orchestrator |       + content_sha1 = (known after apply)
2026-03-17 00:02:32.811443 | orchestrator |       + content_sha256 = (known after apply)
2026-03-17 00:02:32.811447 | orchestrator |       + content_sha512 = (known after apply)
2026-03-17 00:02:32.811451 | orchestrator |       + directory_permission = "0777"
2026-03-17 00:02:32.811455 | orchestrator |       + file_permission = "0644"
2026-03-17 00:02:32.811469 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-03-17 00:02:32.811473 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811477 | orchestrator |     }
2026-03-17 00:02:32.811481 | orchestrator |
2026-03-17 00:02:32.811489 | orchestrator |   # local_file.inventory will be created
2026-03-17 00:02:32.811493 | orchestrator |   + resource "local_file" "inventory" {
2026-03-17 00:02:32.811497 | orchestrator |       + content = (known after apply)
2026-03-17 00:02:32.811501 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-17 00:02:32.811504 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-17 00:02:32.811508 | orchestrator |       + content_md5 = (known after apply)
2026-03-17 00:02:32.811512 | orchestrator |       + content_sha1 = (known after apply)
2026-03-17 00:02:32.811517 | orchestrator |       + content_sha256 = (known after apply)
2026-03-17 00:02:32.811521 | orchestrator |       + content_sha512 = (known after apply)
2026-03-17 00:02:32.811524 | orchestrator |       + directory_permission = "0777"
2026-03-17 00:02:32.811528 | orchestrator |       + file_permission = "0644"
2026-03-17 00:02:32.811532 | orchestrator |       + filename = "inventory.ci"
2026-03-17 00:02:32.811535 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811539 | orchestrator |     }
2026-03-17 00:02:32.811543 | orchestrator |
2026-03-17 00:02:32.811547 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-17 00:02:32.811550 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-17 00:02:32.811554 | orchestrator |       + content = (sensitive value)
2026-03-17 00:02:32.811558 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-17 00:02:32.811561 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-17 00:02:32.811565 | orchestrator |       + content_md5 = (known after apply)
2026-03-17 00:02:32.811569 | orchestrator |       + content_sha1 = (known after apply)
2026-03-17 00:02:32.811572 | orchestrator |       + content_sha256 = (known after apply)
2026-03-17 00:02:32.811576 | orchestrator |       + content_sha512 = (known after apply)
2026-03-17 00:02:32.811580 | orchestrator |       + directory_permission = "0700"
2026-03-17 00:02:32.811583 | orchestrator |       + file_permission = "0600"
2026-03-17 00:02:32.811587 | orchestrator |       + filename = ".id_rsa.ci"
2026-03-17 00:02:32.811591 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811594 | orchestrator |     }
2026-03-17 00:02:32.811598 | orchestrator |
2026-03-17 00:02:32.811602 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-17 00:02:32.811605 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-17 00:02:32.811609 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811613 | orchestrator |     }
2026-03-17 00:02:32.811617 | orchestrator |
2026-03-17 00:02:32.811621 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-17 00:02:32.811625 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-17 00:02:32.811628 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.811632 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.811635 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811639 | orchestrator |       + image_id = (known after apply)
2026-03-17 00:02:32.811643 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.811647 | orchestrator |       + name = "testbed-volume-manager-base"
2026-03-17 00:02:32.811650 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.811654 | orchestrator |       + size = 80
2026-03-17 00:02:32.811658 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.811661 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.811665 | orchestrator |     }
2026-03-17 00:02:32.811669 | orchestrator |
2026-03-17 00:02:32.811672 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-17 00:02:32.811676 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.811680 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.811683 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.811687 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811694 | orchestrator |       + image_id = (known after apply)
2026-03-17 00:02:32.811698 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.811702 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-03-17 00:02:32.811706 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.811709 | orchestrator |       + size = 80
2026-03-17 00:02:32.811713 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.811717 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.811720 | orchestrator |     }
2026-03-17 00:02:32.811724 | orchestrator |
2026-03-17 00:02:32.811728 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-17 00:02:32.811731 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.811735 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.811739 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.811742 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811746 | orchestrator |       + image_id = (known after apply)
2026-03-17 00:02:32.811750 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.811753 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-03-17 00:02:32.811757 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.811761 | orchestrator |       + size = 80
2026-03-17 00:02:32.811764 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.811768 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.811772 | orchestrator |     }
2026-03-17 00:02:32.811775 | orchestrator |
2026-03-17 00:02:32.811790 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-17 00:02:32.811794 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.811797 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.811801 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.811811 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811815 | orchestrator |       + image_id = (known after apply)
2026-03-17 00:02:32.811819 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.811823 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-03-17 00:02:32.811826 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.811830 | orchestrator |       + size = 80
2026-03-17 00:02:32.811834 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.811837 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.811841 | orchestrator |     }
2026-03-17 00:02:32.811845 | orchestrator |
2026-03-17 00:02:32.811848 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-17 00:02:32.811852 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.811856 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.811859 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.811863 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811867 | orchestrator |       + image_id = (known after apply)
2026-03-17 00:02:32.811870 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.811876 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-03-17 00:02:32.811880 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.811884 | orchestrator |       + size = 80
2026-03-17 00:02:32.811888 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.811891 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.811895 | orchestrator |     }
2026-03-17 00:02:32.811899 | orchestrator |
2026-03-17 00:02:32.811902 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-17 00:02:32.811906 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.811910 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.811913 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.811917 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.811924 | orchestrator |       + image_id = (known after apply)
2026-03-17 00:02:32.811928 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.811932 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-03-17 00:02:32.811935 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.811939 | orchestrator |       + size = 80
2026-03-17 00:02:32.811943 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.811947 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.811950 | orchestrator |     }
2026-03-17 00:02:32.814072 | orchestrator |
2026-03-17 00:02:32.814104 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-17 00:02:32.814109 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-17 00:02:32.814114 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.814117 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.814122 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.814126 | orchestrator |       + image_id = (known after apply)
2026-03-17 00:02:32.814129 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.814134 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-03-17 00:02:32.814138 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.814141 | orchestrator |       + size = 80
2026-03-17 00:02:32.814145 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.814149 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.814153 | orchestrator |     }
2026-03-17 00:02:32.814156 | orchestrator |
2026-03-17 00:02:32.814160 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-17 00:02:32.814166 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.814170 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.814173 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.814177 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.814181 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.814185 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-03-17 00:02:32.814189 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.814192 | orchestrator |       + size = 20
2026-03-17 00:02:32.814196 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.814200 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.814203 | orchestrator |     }
2026-03-17 00:02:32.814207 | orchestrator |
2026-03-17 00:02:32.814211 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-17 00:02:32.814215 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.814218 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.814222 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.814226 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.814229 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.814233 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-03-17 00:02:32.814237 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.814240 | orchestrator |       + size = 20
2026-03-17 00:02:32.814244 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.814248 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.814251 | orchestrator |     }
2026-03-17 00:02:32.814255 | orchestrator |
2026-03-17 00:02:32.814259 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-17 00:02:32.814263 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.814266 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.814270 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.814274 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.814277 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.814281 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-03-17 00:02:32.814285 | orchestrator |       + region = (known after apply)
2026-03-17 00:02:32.814297 | orchestrator |       + size = 20
2026-03-17 00:02:32.814301 | orchestrator |       + volume_retype_policy = "never"
2026-03-17 00:02:32.814305 | orchestrator |       + volume_type = "ssd"
2026-03-17 00:02:32.814308 | orchestrator |     }
2026-03-17 00:02:32.814312 | orchestrator |
2026-03-17 00:02:32.814316 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-17 00:02:32.814319 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-17 00:02:32.814323 | orchestrator |       + attachment = (known after apply)
2026-03-17 00:02:32.814327 | orchestrator |       + availability_zone = "nova"
2026-03-17 00:02:32.814331 | orchestrator |       + id = (known after apply)
2026-03-17 00:02:32.814334 | orchestrator |       + metadata = (known after apply)
2026-03-17 00:02:32.814338 | orchestrator | + name = "testbed-volume-3-node-3" 2026-03-17 00:02:32.814342 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.814345 | orchestrator | + size = 20 2026-03-17 00:02:32.814349 | orchestrator | + volume_retype_policy = "never" 2026-03-17 00:02:32.814353 | orchestrator | + volume_type = "ssd" 2026-03-17 00:02:32.814356 | orchestrator | } 2026-03-17 00:02:32.814360 | orchestrator | 2026-03-17 00:02:32.814364 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created 2026-03-17 00:02:32.814367 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-17 00:02:32.814371 | orchestrator | + attachment = (known after apply) 2026-03-17 00:02:32.814375 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.814378 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.814382 | orchestrator | + metadata = (known after apply) 2026-03-17 00:02:32.814386 | orchestrator | + name = "testbed-volume-4-node-4" 2026-03-17 00:02:32.814389 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.814396 | orchestrator | + size = 20 2026-03-17 00:02:32.814400 | orchestrator | + volume_retype_policy = "never" 2026-03-17 00:02:32.814404 | orchestrator | + volume_type = "ssd" 2026-03-17 00:02:32.814408 | orchestrator | } 2026-03-17 00:02:32.814411 | orchestrator | 2026-03-17 00:02:32.814415 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created 2026-03-17 00:02:32.814419 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-17 00:02:32.814422 | orchestrator | + attachment = (known after apply) 2026-03-17 00:02:32.814426 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.814430 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.814433 | orchestrator | + metadata = (known after apply) 2026-03-17 00:02:32.814437 | orchestrator | + name = "testbed-volume-5-node-5" 
2026-03-17 00:02:32.814441 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.814444 | orchestrator | + size = 20 2026-03-17 00:02:32.814448 | orchestrator | + volume_retype_policy = "never" 2026-03-17 00:02:32.814452 | orchestrator | + volume_type = "ssd" 2026-03-17 00:02:32.814456 | orchestrator | } 2026-03-17 00:02:32.814459 | orchestrator | 2026-03-17 00:02:32.814463 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created 2026-03-17 00:02:32.814467 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-17 00:02:32.814471 | orchestrator | + attachment = (known after apply) 2026-03-17 00:02:32.814480 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.814484 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.814487 | orchestrator | + metadata = (known after apply) 2026-03-17 00:02:32.814491 | orchestrator | + name = "testbed-volume-6-node-3" 2026-03-17 00:02:32.814495 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.814498 | orchestrator | + size = 20 2026-03-17 00:02:32.814502 | orchestrator | + volume_retype_policy = "never" 2026-03-17 00:02:32.814506 | orchestrator | + volume_type = "ssd" 2026-03-17 00:02:32.814510 | orchestrator | } 2026-03-17 00:02:32.814513 | orchestrator | 2026-03-17 00:02:32.814517 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created 2026-03-17 00:02:32.814521 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-17 00:02:32.814529 | orchestrator | + attachment = (known after apply) 2026-03-17 00:02:32.814532 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.814536 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.814540 | orchestrator | + metadata = (known after apply) 2026-03-17 00:02:32.814543 | orchestrator | + name = "testbed-volume-7-node-4" 2026-03-17 00:02:32.814547 | orchestrator | + region = (known after apply) 
2026-03-17 00:02:32.814551 | orchestrator | + size = 20 2026-03-17 00:02:32.814555 | orchestrator | + volume_retype_policy = "never" 2026-03-17 00:02:32.814558 | orchestrator | + volume_type = "ssd" 2026-03-17 00:02:32.814562 | orchestrator | } 2026-03-17 00:02:32.814566 | orchestrator | 2026-03-17 00:02:32.814569 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-17 00:02:32.814573 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-17 00:02:32.814577 | orchestrator | + attachment = (known after apply) 2026-03-17 00:02:32.814581 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.814584 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.814588 | orchestrator | + metadata = (known after apply) 2026-03-17 00:02:32.814592 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-17 00:02:32.814595 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.814599 | orchestrator | + size = 20 2026-03-17 00:02:32.814603 | orchestrator | + volume_retype_policy = "never" 2026-03-17 00:02:32.814606 | orchestrator | + volume_type = "ssd" 2026-03-17 00:02:32.814610 | orchestrator | } 2026-03-17 00:02:32.814614 | orchestrator | 2026-03-17 00:02:32.814618 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-17 00:02:32.814621 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-17 00:02:32.814625 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.814629 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.814632 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.814636 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.814640 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.814643 | orchestrator | + config_drive = true 2026-03-17 00:02:32.814647 | orchestrator | + created = (known after apply) 
2026-03-17 00:02:32.814651 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.814654 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-17 00:02:32.814658 | orchestrator | + force_delete = false 2026-03-17 00:02:32.814662 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.814665 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.814669 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.814673 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.814676 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.814680 | orchestrator | + name = "testbed-manager" 2026-03-17 00:02:32.814683 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.814687 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.814691 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.814694 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.814698 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.814702 | orchestrator | + user_data = (sensitive value) 2026-03-17 00:02:32.814705 | orchestrator | 2026-03-17 00:02:32.814709 | orchestrator | + block_device { 2026-03-17 00:02:32.814713 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.814717 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.814723 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.814727 | orchestrator | + multiattach = false 2026-03-17 00:02:32.814730 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.814734 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.814741 | orchestrator | } 2026-03-17 00:02:32.814745 | orchestrator | 2026-03-17 00:02:32.814749 | orchestrator | + network { 2026-03-17 00:02:32.814752 | orchestrator | + access_network = false 2026-03-17 00:02:32.814756 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.814760 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.814763 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.814767 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.814771 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.814774 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.814789 | orchestrator | } 2026-03-17 00:02:32.814793 | orchestrator | } 2026-03-17 00:02:32.814797 | orchestrator | 2026-03-17 00:02:32.814801 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-17 00:02:32.814805 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.814808 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.814812 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.814816 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.814819 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.814823 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.814827 | orchestrator | + config_drive = true 2026-03-17 00:02:32.814830 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.814834 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.814838 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.814841 | orchestrator | + force_delete = false 2026-03-17 00:02:32.814845 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.814849 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.814852 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.814856 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.814860 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.814867 | orchestrator | + name = "testbed-node-0" 2026-03-17 00:02:32.814871 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.814875 | orchestrator | + region 
= (known after apply) 2026-03-17 00:02:32.814878 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.814882 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.814886 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.814889 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.814893 | orchestrator | 2026-03-17 00:02:32.814897 | orchestrator | + block_device { 2026-03-17 00:02:32.814900 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.814904 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.814908 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.814911 | orchestrator | + multiattach = false 2026-03-17 00:02:32.814915 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.814919 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.814922 | orchestrator | } 2026-03-17 00:02:32.814926 | orchestrator | 2026-03-17 00:02:32.814930 | orchestrator | + network { 2026-03-17 00:02:32.814933 | orchestrator | + access_network = false 2026-03-17 00:02:32.814937 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.814941 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.814945 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.814948 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.814952 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.814956 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.814959 | orchestrator | } 2026-03-17 00:02:32.814963 | orchestrator | } 2026-03-17 00:02:32.814967 | orchestrator | 2026-03-17 00:02:32.814970 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-17 00:02:32.814974 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.814978 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 
00:02:32.814986 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.814990 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.814994 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.814998 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.815001 | orchestrator | + config_drive = true 2026-03-17 00:02:32.815005 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.815008 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.815012 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.815016 | orchestrator | + force_delete = false 2026-03-17 00:02:32.815019 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.815023 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.815027 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.815030 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.815034 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.815038 | orchestrator | + name = "testbed-node-1" 2026-03-17 00:02:32.815041 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.815045 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.815049 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.815052 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.815056 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.815060 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.815064 | orchestrator | 2026-03-17 00:02:32.815067 | orchestrator | + block_device { 2026-03-17 00:02:32.815071 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.815075 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.815078 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.815082 | orchestrator | + multiattach = false 2026-03-17 
00:02:32.815085 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.815089 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.815093 | orchestrator | } 2026-03-17 00:02:32.815096 | orchestrator | 2026-03-17 00:02:32.815100 | orchestrator | + network { 2026-03-17 00:02:32.815104 | orchestrator | + access_network = false 2026-03-17 00:02:32.815107 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.815111 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.815115 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.815118 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.815122 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.815126 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.815129 | orchestrator | } 2026-03-17 00:02:32.815133 | orchestrator | } 2026-03-17 00:02:32.815139 | orchestrator | 2026-03-17 00:02:32.815143 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-17 00:02:32.815146 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.815150 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.815154 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.815158 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.815161 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.815168 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.815171 | orchestrator | + config_drive = true 2026-03-17 00:02:32.815175 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.815179 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.815183 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.815186 | orchestrator | + force_delete = false 2026-03-17 00:02:32.815190 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 
00:02:32.815194 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.815197 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.815204 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.815208 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.815211 | orchestrator | + name = "testbed-node-2" 2026-03-17 00:02:32.815232 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.815236 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.815239 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.815247 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.815251 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.815254 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.815258 | orchestrator | 2026-03-17 00:02:32.815262 | orchestrator | + block_device { 2026-03-17 00:02:32.815265 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.815269 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.815273 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.815276 | orchestrator | + multiattach = false 2026-03-17 00:02:32.815280 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.815284 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.815287 | orchestrator | } 2026-03-17 00:02:32.815291 | orchestrator | 2026-03-17 00:02:32.815295 | orchestrator | + network { 2026-03-17 00:02:32.815299 | orchestrator | + access_network = false 2026-03-17 00:02:32.815302 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.815306 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.815310 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.815313 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.815317 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.815320 | orchestrator | + uuid 
= (known after apply) 2026-03-17 00:02:32.815324 | orchestrator | } 2026-03-17 00:02:32.815328 | orchestrator | } 2026-03-17 00:02:32.815331 | orchestrator | 2026-03-17 00:02:32.815335 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-17 00:02:32.815339 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.815343 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.815346 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.815350 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.815353 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.815357 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.815361 | orchestrator | + config_drive = true 2026-03-17 00:02:32.815364 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.815368 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.815372 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.815375 | orchestrator | + force_delete = false 2026-03-17 00:02:32.815379 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.815383 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.815386 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.815390 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.815394 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.815397 | orchestrator | + name = "testbed-node-3" 2026-03-17 00:02:32.815413 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.815417 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.815421 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.815424 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.815428 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.815432 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.815436 | orchestrator | 2026-03-17 00:02:32.815439 | orchestrator | + block_device { 2026-03-17 00:02:32.815446 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.815450 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.815453 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.815460 | orchestrator | + multiattach = false 2026-03-17 00:02:32.815464 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.815468 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.815471 | orchestrator | } 2026-03-17 00:02:32.815475 | orchestrator | 2026-03-17 00:02:32.815479 | orchestrator | + network { 2026-03-17 00:02:32.815482 | orchestrator | + access_network = false 2026-03-17 00:02:32.815486 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.815490 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.815493 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.815497 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.815501 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.815504 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.815508 | orchestrator | } 2026-03-17 00:02:32.815512 | orchestrator | } 2026-03-17 00:02:32.815517 | orchestrator | 2026-03-17 00:02:32.815521 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-17 00:02:32.815525 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.815528 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.815532 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.815536 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.815540 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.815543 | orchestrator | + availability_zone = "nova" 2026-03-17 
00:02:32.815547 | orchestrator | + config_drive = true 2026-03-17 00:02:32.815550 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.815554 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.815558 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.815561 | orchestrator | + force_delete = false 2026-03-17 00:02:32.815565 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.815569 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.815572 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.815576 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.815580 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.815583 | orchestrator | + name = "testbed-node-4" 2026-03-17 00:02:32.815587 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.815591 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.815594 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.815598 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.815602 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.815605 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.815609 | orchestrator | 2026-03-17 00:02:32.815613 | orchestrator | + block_device { 2026-03-17 00:02:32.815616 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.815620 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.815624 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.815627 | orchestrator | + multiattach = false 2026-03-17 00:02:32.815631 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.815634 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.815638 | orchestrator | } 2026-03-17 00:02:32.815642 | orchestrator | 2026-03-17 00:02:32.815646 | orchestrator | + network { 2026-03-17 00:02:32.815649 | orchestrator | + 
access_network = false 2026-03-17 00:02:32.815653 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.815656 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.815660 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.815664 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.815667 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.815671 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.815675 | orchestrator | } 2026-03-17 00:02:32.815678 | orchestrator | } 2026-03-17 00:02:32.815687 | orchestrator | 2026-03-17 00:02:32.815690 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-17 00:02:32.815694 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-17 00:02:32.815698 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-17 00:02:32.815701 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-17 00:02:32.815705 | orchestrator | + all_metadata = (known after apply) 2026-03-17 00:02:32.815709 | orchestrator | + all_tags = (known after apply) 2026-03-17 00:02:32.815712 | orchestrator | + availability_zone = "nova" 2026-03-17 00:02:32.815716 | orchestrator | + config_drive = true 2026-03-17 00:02:32.815720 | orchestrator | + created = (known after apply) 2026-03-17 00:02:32.815723 | orchestrator | + flavor_id = (known after apply) 2026-03-17 00:02:32.815727 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-17 00:02:32.815731 | orchestrator | + force_delete = false 2026-03-17 00:02:32.815737 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-17 00:02:32.815741 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.815744 | orchestrator | + image_id = (known after apply) 2026-03-17 00:02:32.815748 | orchestrator | + image_name = (known after apply) 2026-03-17 00:02:32.815752 | orchestrator | + key_pair = "testbed" 2026-03-17 00:02:32.815756 | orchestrator | 
+ name = "testbed-node-5" 2026-03-17 00:02:32.815759 | orchestrator | + power_state = "active" 2026-03-17 00:02:32.815763 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.815767 | orchestrator | + security_groups = (known after apply) 2026-03-17 00:02:32.815770 | orchestrator | + stop_before_destroy = false 2026-03-17 00:02:32.815774 | orchestrator | + updated = (known after apply) 2026-03-17 00:02:32.815787 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-17 00:02:32.815791 | orchestrator | 2026-03-17 00:02:32.815795 | orchestrator | + block_device { 2026-03-17 00:02:32.815799 | orchestrator | + boot_index = 0 2026-03-17 00:02:32.815802 | orchestrator | + delete_on_termination = false 2026-03-17 00:02:32.815806 | orchestrator | + destination_type = "volume" 2026-03-17 00:02:32.815810 | orchestrator | + multiattach = false 2026-03-17 00:02:32.815813 | orchestrator | + source_type = "volume" 2026-03-17 00:02:32.815817 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.815821 | orchestrator | } 2026-03-17 00:02:32.815824 | orchestrator | 2026-03-17 00:02:32.815828 | orchestrator | + network { 2026-03-17 00:02:32.815832 | orchestrator | + access_network = false 2026-03-17 00:02:32.815835 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-17 00:02:32.815839 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-17 00:02:32.815843 | orchestrator | + mac = (known after apply) 2026-03-17 00:02:32.815846 | orchestrator | + name = (known after apply) 2026-03-17 00:02:32.815850 | orchestrator | + port = (known after apply) 2026-03-17 00:02:32.815854 | orchestrator | + uuid = (known after apply) 2026-03-17 00:02:32.815858 | orchestrator | } 2026-03-17 00:02:32.815861 | orchestrator | } 2026-03-17 00:02:32.815865 | orchestrator | 2026-03-17 00:02:32.815869 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-17 00:02:32.815872 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-03-17 00:02:32.815876 | orchestrator | + fingerprint = (known after apply) 2026-03-17 00:02:32.815880 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.815883 | orchestrator | + name = "testbed" 2026-03-17 00:02:32.815887 | orchestrator | + private_key = (sensitive value) 2026-03-17 00:02:32.815890 | orchestrator | + public_key = (known after apply) 2026-03-17 00:02:32.815894 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.815898 | orchestrator | + user_id = (known after apply) 2026-03-17 00:02:32.815901 | orchestrator | } 2026-03-17 00:02:32.815905 | orchestrator | 2026-03-17 00:02:32.815909 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-17 00:02:32.815913 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-17 00:02:32.815919 | orchestrator | + device = (known after apply) 2026-03-17 00:02:32.815923 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.815927 | orchestrator | + instance_id = (known after apply) 2026-03-17 00:02:32.815930 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.815934 | orchestrator | + volume_id = (known after apply) 2026-03-17 00:02:32.815938 | orchestrator | } 2026-03-17 00:02:32.815941 | orchestrator | 2026-03-17 00:02:32.815945 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-17 00:02:32.815949 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-17 00:02:32.815952 | orchestrator | + device = (known after apply) 2026-03-17 00:02:32.815956 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.815960 | orchestrator | + instance_id = (known after apply) 2026-03-17 00:02:32.815963 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.815967 | orchestrator | + volume_id = (known after apply) 2026-03-17 
00:02:32.815971 | orchestrator | } 2026-03-17 00:02:32.815976 | orchestrator | 2026-03-17 00:02:32.815980 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-17 00:02:32.815984 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-17 00:02:32.815987 | orchestrator | + device = (known after apply) 2026-03-17 00:02:32.815991 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.815995 | orchestrator | + instance_id = (known after apply) 2026-03-17 00:02:32.815998 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.816002 | orchestrator | + volume_id = (known after apply) 2026-03-17 00:02:32.816006 | orchestrator | } 2026-03-17 00:02:32.816009 | orchestrator | 2026-03-17 00:02:32.816013 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-03-17 00:02:32.816017 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-17 00:02:32.816020 | orchestrator | + device = (known after apply) 2026-03-17 00:02:32.816024 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.816028 | orchestrator | + instance_id = (known after apply) 2026-03-17 00:02:32.816031 | orchestrator | + region = (known after apply) 2026-03-17 00:02:32.816035 | orchestrator | + volume_id = (known after apply) 2026-03-17 00:02:32.816039 | orchestrator | } 2026-03-17 00:02:32.816042 | orchestrator | 2026-03-17 00:02:32.816046 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-03-17 00:02:32.816050 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-17 00:02:32.816053 | orchestrator | + device = (known after apply) 2026-03-17 00:02:32.816057 | orchestrator | + id = (known after apply) 2026-03-17 00:02:32.816061 | orchestrator | + instance_id = (known after apply) 2026-03-17 00:02:32.816067 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-03-17 00:02:32.826288 | orchestrator | + ip_version = 4
2026-03-17 00:02:32.826292 | orchestrator | + ipv6_address_mode = (known after apply)
2026-03-17 00:02:32.826295 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-03-17 00:02:32.826299 | orchestrator | + name = "subnet-testbed-management"
2026-03-17 00:02:32.826303 | orchestrator | + network_id = (known after apply)
2026-03-17 00:02:32.826306 | orchestrator | + no_gateway = false
2026-03-17 00:02:32.826310 | orchestrator | + region = (known after apply)
2026-03-17 00:02:32.826314 | orchestrator | + service_types = (known after apply)
2026-03-17 00:02:32.826327 | orchestrator | + tenant_id = (known after apply)
2026-03-17 00:02:32.826331 | orchestrator |
2026-03-17 00:02:32.826335 | orchestrator | + allocation_pool {
2026-03-17 00:02:32.826338 | orchestrator | + end = "192.168.31.250"
2026-03-17 00:02:32.826342 | orchestrator | + start = "192.168.31.200"
2026-03-17 00:02:32.826346 | orchestrator | }
2026-03-17 00:02:32.826350 | orchestrator | }
2026-03-17 00:02:32.826353 | orchestrator |
2026-03-17 00:02:32.826357 | orchestrator | # terraform_data.image will be created
2026-03-17 00:02:32.826361 | orchestrator | + resource "terraform_data" "image" {
2026-03-17 00:02:32.826364 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.826368 | orchestrator | + input = "Ubuntu 24.04"
2026-03-17 00:02:32.826372 | orchestrator | + output = (known after apply)
2026-03-17 00:02:32.826375 | orchestrator | }
2026-03-17 00:02:32.826379 | orchestrator |
2026-03-17 00:02:32.826383 | orchestrator | # terraform_data.image_node will be created
2026-03-17 00:02:32.826386 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-17 00:02:32.826390 | orchestrator | + id = (known after apply)
2026-03-17 00:02:32.826394 | orchestrator | + input = "Ubuntu 24.04"
2026-03-17 00:02:32.826397 | orchestrator | + output = (known after apply)
2026-03-17 00:02:32.826401 | orchestrator | }
2026-03-17 00:02:32.826405 | orchestrator |
2026-03-17 00:02:32.826408 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-17 00:02:32.826412 | orchestrator |
2026-03-17 00:02:32.826416 | orchestrator | Changes to Outputs:
2026-03-17 00:02:32.826419 | orchestrator | + manager_address = (sensitive value)
2026-03-17 00:02:32.826423 | orchestrator | + private_key = (sensitive value)
2026-03-17 00:02:33.049509 | orchestrator | terraform_data.image: Creating...
2026-03-17 00:02:33.052044 | orchestrator | terraform_data.image: Creation complete after 0s [id=0b8e0678-4465-fe06-7171-68b9bc68c5fa]
2026-03-17 00:02:33.052062 | orchestrator | terraform_data.image_node: Creating...
2026-03-17 00:02:33.052550 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=ed3b5183-12e2-b634-4d76-2cd0bdc70ec7]
2026-03-17 00:02:33.082097 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-17 00:02:33.082152 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-17 00:02:33.090077 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-17 00:02:33.102636 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-17 00:02:33.106079 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-17 00:02:33.106118 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-17 00:02:33.106123 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-17 00:02:33.106127 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-17 00:02:33.106212 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-17 00:02:33.108738 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-17 00:02:33.655442 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-17 00:02:34.036589 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-03-17 00:02:34.036643 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-17 00:02:34.036652 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-17 00:02:34.036657 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-17 00:02:34.036661 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-17 00:02:34.285301 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=2b56f1e9-2a32-489a-b32c-56c3c5bc8b5d]
2026-03-17 00:02:34.294632 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-17 00:02:36.817400 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=3efb5a56-103b-42d9-8866-8efb8a438184]
2026-03-17 00:02:36.832445 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=5cc759d4-bbcf-4791-ab44-d26d1bbabcc1]
2026-03-17 00:02:36.833259 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-17 00:02:36.837697 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=81ed8c1c36fdeab5e6536ad72db5090d2d078329]
2026-03-17 00:02:36.839960 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=c89d09f1-caef-4162-a829-09cd388ce865]
2026-03-17 00:02:36.841408 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-17 00:02:36.842560 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-17 00:02:36.844272 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-17 00:02:36.854349 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=d717cdad-60c8-49b4-a1ca-e286e86fc235]
2026-03-17 00:02:36.864165 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-17 00:02:36.882581 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=792a3cd6-8361-4aa2-9d0e-e1d89bff3276]
2026-03-17 00:02:36.885395 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=23482283-1618-4112-88d0-516e8abcc23d]
2026-03-17 00:02:36.889179 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-17 00:02:36.894838 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-17 00:02:36.899311 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=c1f33796a61ea0e44a3b6e7813440e55435f5997]
2026-03-17 00:02:36.903763 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-17 00:02:36.951585 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=d1d144f4-1f7d-43cf-b529-b5ecced41bc7]
2026-03-17 00:02:36.965650 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=c18a6eac-daa9-4a49-b877-784985e05b4b]
2026-03-17 00:02:36.965745 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-17 00:02:36.974401 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=d8c7f886-b638-428f-9acd-2bef6a3abd32]
2026-03-17 00:02:37.649542 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=f9f9bdf5-53bb-40c1-a0f3-235d84124d2c]
2026-03-17 00:02:37.960300 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=82ce42bc-a290-4c5e-bfd1-6464253354fa]
2026-03-17 00:02:37.970250 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-17 00:02:40.279423 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=22c407cf-e116-4808-97ee-42321e6f678c]
2026-03-17 00:02:40.320924 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=7eabd72e-ea70-47b9-ae5c-bbb511389266]
2026-03-17 00:02:40.337191 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=5fc221d6-1f30-457e-9b4e-578a7aeb5c88]
2026-03-17 00:02:40.391903 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=d1d4b81a-b793-41a0-ad40-9abf2e7492cb]
2026-03-17 00:02:40.405442 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=a0ccb72e-5500-4916-8b26-16a7320e18ef]
2026-03-17 00:02:40.406645 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=dcece8a6-a124-4356-af52-fd20405fc0e0]
2026-03-17 00:02:40.831369 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=9db827ea-5376-4c17-bf5f-cd378d340d5e]
2026-03-17 00:02:40.843106 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-17 00:02:40.845334 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-17 00:02:40.846035 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-17 00:02:41.102328 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=60beccdc-7082-4611-8aac-0a690746ea64]
2026-03-17 00:02:41.109067 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-17 00:02:41.112703 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-17 00:02:41.112757 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-17 00:02:41.112812 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-17 00:02:41.113978 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-17 00:02:41.120316 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-17 00:02:41.683707 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=d57e211f-948b-44c5-9844-70aeba07b503]
2026-03-17 00:02:41.964672 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=109ca916-7968-43be-ba77-b1a54baa947a]
2026-03-17 00:02:41.994501 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=fb6492fb-d6dc-49a9-913e-36baf5195372]
2026-03-17 00:02:42.003180 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-17 00:02:42.004631 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-17 00:02:42.004684 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-17 00:02:42.006250 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-17 00:02:42.009113 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-17 00:02:42.445645 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=6cfab587-7be8-42c2-b0ea-dc4757d81c9d]
2026-03-17 00:02:42.459596 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-17 00:02:42.475534 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=1da09925-8c01-47fb-a6eb-130a1e6706bd]
2026-03-17 00:02:42.485025 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-17 00:02:42.649722 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=8e2efb55-7a75-469d-bd76-7937187783c7]
2026-03-17 00:02:42.663123 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-17 00:02:42.668749 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=d4c64752-b303-49bc-9711-43d358b6d6b1]
2026-03-17 00:02:42.680891 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-17 00:02:42.874956 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=71c42404-ce15-43dc-93ef-40508710b165]
2026-03-17 00:02:42.885278 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-17 00:02:42.938824 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=e7e7140f-27fa-432d-9c28-bf8d272cb198]
2026-03-17 00:02:43.238487 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=6838e5f2-19c5-446d-a16f-fb18b66e2f15]
2026-03-17 00:02:43.604839 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 3s [id=82c90202-289d-435b-a632-dfc882bb8467]
2026-03-17 00:02:43.948824 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=63da57dc-05f5-46f0-b52d-d5851123c70f]
2026-03-17 00:02:44.014630 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=3c15376c-4668-40ce-996f-c18cfa243e10]
2026-03-17 00:02:44.195673 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=8f2052bc-aaa1-430d-bb9e-446848657f23]
2026-03-17 00:02:44.431566 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ac617e09-6561-4b97-b354-272d563398d6]
2026-03-17 00:02:44.723541 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 4s [id=2ef78273-9fb6-45d9-b20f-0fb74052fb49]
2026-03-17 00:02:45.290134 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=a3225a21-8e8f-4e76-8dab-c7f67a09801b]
2026-03-17 00:02:45.295203 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-17 00:02:45.897136 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 3s [id=66c2d25a-954e-4b07-96f5-c79d86fa48b5]
2026-03-17 00:02:45.924027 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-17 00:02:45.937643 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-17 00:02:45.942327 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-17 00:02:45.944689 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-17 00:02:45.945068 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-17 00:02:45.952615 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-17 00:02:48.083375 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=36babd97-fa33-4edb-9145-dabb7ee1b88f]
2026-03-17 00:02:48.090519 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-17 00:02:48.101953 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-17 00:02:48.103642 | orchestrator | local_file.inventory: Creating...
2026-03-17 00:02:48.873384 | orchestrator | local_file.inventory: Creation complete after 1s [id=33c76e0d0d49332af33766aa01a2612b7853d7c7]
2026-03-17 00:02:48.874922 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 1s [id=6bc617df006f0adead20eb86364ec2602cd35d6d]
2026-03-17 00:02:49.967452 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=36babd97-fa33-4edb-9145-dabb7ee1b88f]
2026-03-17 00:02:55.930386 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-17 00:02:55.941621 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-17 00:02:55.943971 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-17 00:02:55.948450 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-17 00:02:55.948482 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-17 00:02:55.953759 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-17 00:03:05.939223 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-17 00:03:05.942424 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-17 00:03:05.944813 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-17 00:03:05.948954 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-17 00:03:05.949006 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-17 00:03:05.954315 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-17 00:03:15.947614 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-17 00:03:15.947704 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-17 00:03:15.947720 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-17 00:03:15.949924 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-17 00:03:15.949981 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-17 00:03:15.955355 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-17 00:03:16.885354 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=bb476fec-73a1-4e2b-a260-a2d6c31efba5]
2026-03-17 00:03:17.031351 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=b48b7969-c5d1-40a7-b091-ba08f82a74a9]
2026-03-17 00:03:25.956693 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-17 00:03:25.956886 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-17 00:03:25.956918 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-17 00:03:25.956978 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-17 00:03:27.123387 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=aacd0f39-0b7e-4e9a-b7ca-81eb574dc947]
2026-03-17 00:03:27.500608 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 42s [id=c28fc0f5-7ce2-4f21-be21-9ee99f61775a]
2026-03-17 00:03:35.956990 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-17 00:03:35.957104 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-03-17 00:03:37.571838 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 52s [id=9c7a129e-eef7-46e7-9d1c-804291a67082]
2026-03-17 00:03:45.964877 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-03-17 00:03:47.796505 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m2s [id=6c3b9c96-1e88-4d85-a82e-59c3e5d3a39d]
2026-03-17 00:03:47.819606 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-17 00:03:47.822394 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8724063084613814864]
2026-03-17 00:03:47.827282 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-17 00:03:47.834113 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-17 00:03:47.834198 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-17 00:03:47.834204 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-17 00:03:47.835557 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-17 00:03:47.837692 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-17 00:03:47.843655 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-17 00:03:47.857717 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-17 00:03:47.858612 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-17 00:03:47.872763 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-17 00:03:51.315600 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=c28fc0f5-7ce2-4f21-be21-9ee99f61775a/792a3cd6-8361-4aa2-9d0e-e1d89bff3276]
2026-03-17 00:03:51.322129 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=6c3b9c96-1e88-4d85-a82e-59c3e5d3a39d/23482283-1618-4112-88d0-516e8abcc23d]
2026-03-17 00:03:51.557012 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=6c3b9c96-1e88-4d85-a82e-59c3e5d3a39d/3efb5a56-103b-42d9-8866-8efb8a438184]
2026-03-17 00:03:51.569290 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=c28fc0f5-7ce2-4f21-be21-9ee99f61775a/c89d09f1-caef-4162-a829-09cd388ce865]
2026-03-17 00:03:51.604686 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=9c7a129e-eef7-46e7-9d1c-804291a67082/c18a6eac-daa9-4a49-b877-784985e05b4b]
2026-03-17 00:03:51.664177 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=9c7a129e-eef7-46e7-9d1c-804291a67082/d8c7f886-b638-428f-9acd-2bef6a3abd32]
2026-03-17 00:03:57.688606 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=c28fc0f5-7ce2-4f21-be21-9ee99f61775a/d1d144f4-1f7d-43cf-b529-b5ecced41bc7]
2026-03-17 00:03:57.718420 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=6c3b9c96-1e88-4d85-a82e-59c3e5d3a39d/5cc759d4-bbcf-4791-ab44-d26d1bbabcc1]
2026-03-17 00:03:57.758539 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=9c7a129e-eef7-46e7-9d1c-804291a67082/d717cdad-60c8-49b4-a1ca-e286e86fc235]
2026-03-17 00:03:57.873725 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-17 00:04:07.873975 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-17 00:04:08.726498 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=c0bea588-b91d-4fd5-b800-ee198c243f31]
2026-03-17 00:04:08.756023 | orchestrator |
2026-03-17 00:04:08.756083 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-17 00:04:08.756094 | orchestrator |
2026-03-17 00:04:08.756101 | orchestrator | Outputs:
2026-03-17 00:04:08.756108 | orchestrator |
2026-03-17 00:04:08.756127 | orchestrator | manager_address =
2026-03-17 00:04:08.756134 | orchestrator | private_key =
2026-03-17 00:04:09.117005 | orchestrator | ok: Runtime: 0:01:41.485483
2026-03-17 00:04:09.170673 |
2026-03-17 00:04:09.170807 | TASK [Create infrastructure (stable)]
2026-03-17 00:04:09.702386 | orchestrator | skipping: Conditional result was False
2026-03-17 00:04:09.720785 |
2026-03-17 00:04:09.720972 | TASK [Fetch manager address]
2026-03-17 00:04:10.201589 | orchestrator | ok
2026-03-17 00:04:10.209304 |
2026-03-17 00:04:10.209433 | TASK [Set manager_host address]
2026-03-17 00:04:10.289873 | orchestrator | ok
2026-03-17 00:04:10.300562 |
2026-03-17 00:04:10.300694 | LOOP [Update ansible collections]
2026-03-17 00:04:13.048131 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-17 00:04:13.048501 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-17 00:04:13.048555 | orchestrator | Starting galaxy collection install process
2026-03-17 00:04:13.048592 | orchestrator | Process install dependency map
2026-03-17 00:04:13.048623 | orchestrator | Starting collection install process
2026-03-17 00:04:13.048652 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-03-17 00:04:13.048691 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-03-17 00:04:13.048734 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-17 00:04:13.048813 | orchestrator | ok: Item: commons Runtime: 0:00:02.374187
2026-03-17 00:04:14.147601 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-17 00:04:14.147772 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-17 00:04:14.147824 | orchestrator | Starting galaxy collection install process
2026-03-17 00:04:14.147864 | orchestrator | Process install dependency map
2026-03-17 00:04:14.147901 | orchestrator | Starting collection install process
2026-03-17 00:04:14.147935 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-03-17 00:04:14.147968 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-03-17 00:04:14.148002 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-17 00:04:14.148053 | orchestrator | ok: Item: services Runtime: 0:00:00.733181
2026-03-17 00:04:14.165189 |
2026-03-17 00:04:14.165330 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-17 00:04:24.692268 | orchestrator | ok
2026-03-17 00:04:24.700312 |
2026-03-17 00:04:24.700422 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-17 00:05:24.742440 | orchestrator | ok
2026-03-17 00:05:24.750133 |
2026-03-17 00:05:24.750255 | TASK [Fetch manager ssh hostkey]
2026-03-17 00:05:26.325559 | orchestrator | Output suppressed because no_log was given
2026-03-17 00:05:26.339836 |
2026-03-17 00:05:26.339994 | TASK [Get ssh keypair from terraform environment]
2026-03-17 00:05:26.878648 | orchestrator | ok: Runtime: 0:00:00.005988
2026-03-17 00:05:26.887560 |
2026-03-17 00:05:26.887668 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-17 00:05:26.930523 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-17 00:05:26.943553 |
2026-03-17 00:05:26.943776 | TASK [Run manager part 0]
2026-03-17 00:05:27.908687 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-17 00:05:27.963211 | orchestrator |
2026-03-17 00:05:27.963262 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-17 00:05:27.963269 | orchestrator |
2026-03-17 00:05:27.963284 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-17 00:05:29.667203 | orchestrator | ok: [testbed-manager]
2026-03-17 00:05:29.667262 | orchestrator |
2026-03-17 00:05:29.667288 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-17 00:05:29.667300 | orchestrator |
2026-03-17 00:05:29.667313 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-17 00:05:31.511280 | orchestrator | ok: [testbed-manager]
2026-03-17 00:05:31.511330 | orchestrator |
2026-03-17 00:05:31.511339 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-17 00:05:32.211331 | orchestrator | ok: [testbed-manager]
2026-03-17 00:05:32.211383 | orchestrator |
2026-03-17 00:05:32.211395 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-17 00:05:32.258035 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:05:32.258077 | orchestrator |
2026-03-17 00:05:32.258090 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-17 00:05:32.286556 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:05:32.286593 | orchestrator |
2026-03-17 00:05:32.286603 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-17 00:05:32.315129 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:05:32.315159 | orchestrator |
2026-03-17 00:05:32.315166 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-17 00:05:32.343495 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:05:32.343531 | orchestrator |
2026-03-17 00:05:32.343540 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-17 00:05:32.369690 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:05:32.369723 | orchestrator |
2026-03-17 00:05:32.369732 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-17 00:05:32.395902 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:05:32.395930 | orchestrator |
2026-03-17 00:05:32.395941 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-17 00:05:32.430011 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:05:32.430057 | orchestrator |
2026-03-17 00:05:32.430067 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-17 00:05:33.120032 | orchestrator | changed: [testbed-manager]
2026-03-17 00:05:33.120102 | orchestrator |
2026-03-17 00:05:33.120114 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-17 00:08:25.649242 | orchestrator | changed: [testbed-manager]
2026-03-17 00:08:25.649374 | orchestrator |
2026-03-17 00:08:25.649402 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-17 00:09:48.873331 | orchestrator | changed: [testbed-manager]
2026-03-17 00:09:48.873402 | orchestrator |
2026-03-17 00:09:48.873418 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-17 00:10:08.908753 | orchestrator | changed: [testbed-manager]
2026-03-17 00:10:08.908817 | orchestrator |
2026-03-17 00:10:08.908829 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-17 00:10:18.225164 | orchestrator | changed: [testbed-manager]
2026-03-17 00:10:18.225270 | orchestrator |
2026-03-17 00:10:18.225289 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-17 00:10:18.269727 | orchestrator | ok: [testbed-manager]
2026-03-17 00:10:18.269790 | orchestrator |
2026-03-17 00:10:18.269799 | orchestrator | TASK [Get current user] ********************************************************
2026-03-17 00:10:19.099452 | orchestrator | ok: [testbed-manager]
2026-03-17 00:10:19.099494 | orchestrator |
2026-03-17 00:10:19.099506 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-17 00:10:19.855110 | orchestrator | changed: [testbed-manager]
2026-03-17 00:10:19.855948 | orchestrator |
2026-03-17 00:10:19.856009 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-17 00:10:26.037976 | orchestrator | changed: [testbed-manager]
2026-03-17 00:10:26.038107 | orchestrator |
2026-03-17 00:10:26.038149 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-17 00:10:31.770713 | orchestrator | changed: [testbed-manager]
2026-03-17 00:10:31.770778 | orchestrator |
2026-03-17 00:10:31.770792 | orchestrator | TASK [Install requests >= 2.32.2]
********************************************** 2026-03-17 00:10:34.316180 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:34.316276 | orchestrator | 2026-03-17 00:10:34.316293 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-17 00:10:38.363064 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:38.363103 | orchestrator | 2026-03-17 00:10:38.363109 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-17 00:10:39.467686 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-17 00:10:39.467731 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-17 00:10:39.467738 | orchestrator | 2026-03-17 00:10:39.467745 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-17 00:10:39.514469 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-17 00:10:39.514546 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-17 00:10:39.514560 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-17 00:10:39.514571 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-17 00:10:46.714350 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-17 00:10:46.714386 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-17 00:10:46.714390 | orchestrator | 2026-03-17 00:10:46.714395 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-17 00:10:47.267008 | orchestrator | changed: [testbed-manager] 2026-03-17 00:10:47.267092 | orchestrator | 2026-03-17 00:10:47.267109 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-17 00:19:08.864040 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-17 00:19:08.864175 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-17 00:19:08.864204 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-17 00:19:08.864224 | orchestrator | 2026-03-17 00:19:08.864243 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-17 00:19:11.204250 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-17 00:19:11.204339 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-17 00:19:11.204354 | orchestrator | 2026-03-17 00:19:11.204367 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-17 00:19:11.204379 | orchestrator | 2026-03-17 00:19:11.204391 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:19:12.615025 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:12.615121 | orchestrator | 2026-03-17 00:19:12.615140 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-17 00:19:12.673069 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:12.673171 | 
orchestrator | 2026-03-17 00:19:12.673187 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-17 00:19:12.747295 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:12.747337 | orchestrator | 2026-03-17 00:19:12.747345 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-17 00:19:13.557783 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:13.557874 | orchestrator | 2026-03-17 00:19:13.557890 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-17 00:19:14.295741 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:14.295780 | orchestrator | 2026-03-17 00:19:14.295788 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-17 00:19:15.636603 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-17 00:19:15.636645 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-17 00:19:15.636653 | orchestrator | 2026-03-17 00:19:15.636667 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-17 00:19:17.072044 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:17.072193 | orchestrator | 2026-03-17 00:19:17.072223 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-17 00:19:18.878578 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-17 00:19:18.878622 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-17 00:19:18.878630 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-17 00:19:18.878637 | orchestrator | 2026-03-17 00:19:18.878644 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-17 00:19:18.942567 | orchestrator | skipping: 
[testbed-manager] 2026-03-17 00:19:18.942614 | orchestrator | 2026-03-17 00:19:18.942623 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-17 00:19:19.012303 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:19.012344 | orchestrator | 2026-03-17 00:19:19.012354 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-17 00:19:19.579763 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:19.579861 | orchestrator | 2026-03-17 00:19:19.579879 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-17 00:19:19.661815 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:19.661925 | orchestrator | 2026-03-17 00:19:19.661956 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-17 00:19:20.540194 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:19:20.540235 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:20.540244 | orchestrator | 2026-03-17 00:19:20.540250 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-17 00:19:20.572097 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:20.572204 | orchestrator | 2026-03-17 00:19:20.572219 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-17 00:19:20.607191 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:20.607232 | orchestrator | 2026-03-17 00:19:20.607240 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-17 00:19:20.641699 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:20.641739 | orchestrator | 2026-03-17 00:19:20.641750 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-17 00:19:20.710727 | 
orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:20.710768 | orchestrator | 2026-03-17 00:19:20.710776 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-17 00:19:21.430180 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:21.430215 | orchestrator | 2026-03-17 00:19:21.430220 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-17 00:19:21.430225 | orchestrator | 2026-03-17 00:19:21.430229 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:19:22.835412 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:22.835447 | orchestrator | 2026-03-17 00:19:22.835453 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-17 00:19:23.840058 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:23.840094 | orchestrator | 2026-03-17 00:19:23.840100 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:19:23.840106 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-17 00:19:23.840111 | orchestrator | 2026-03-17 00:19:24.007588 | orchestrator | ok: Runtime: 0:13:56.670726 2026-03-17 00:19:24.025966 | 2026-03-17 00:19:24.026156 | TASK [Point out that logging in on the manager is now possible] 2026-03-17 00:19:24.076257 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-17 00:19:24.086572 | 2026-03-17 00:19:24.086774 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-17 00:19:24.135769 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-17 00:19:24.145995 | 2026-03-17 00:19:24.146140 | TASK [Run manager part 1 + 2] 2026-03-17 00:19:25.259403 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-17 00:19:25.335413 | orchestrator | 2026-03-17 00:19:25.335497 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-17 00:19:25.335515 | orchestrator | 2026-03-17 00:19:25.335542 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:19:28.268537 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:28.268591 | orchestrator | 2026-03-17 00:19:28.268614 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-17 00:19:28.313237 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:28.313299 | orchestrator | 2026-03-17 00:19:28.313311 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-17 00:19:28.358566 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:28.358617 | orchestrator | 2026-03-17 00:19:28.358627 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-17 00:19:28.410539 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:28.410590 | orchestrator | 2026-03-17 00:19:28.410600 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-17 00:19:28.486067 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:28.486124 | orchestrator | 2026-03-17 00:19:28.486136 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-17 00:19:28.551235 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:28.551290 | orchestrator | 2026-03-17 00:19:28.551300 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-17 00:19:28.602696 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-17 00:19:28.602743 | orchestrator | 2026-03-17 00:19:28.602749 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-17 00:19:29.338978 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:29.339036 | orchestrator | 2026-03-17 00:19:29.339046 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-17 00:19:29.393094 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:29.393164 | orchestrator | 2026-03-17 00:19:29.393173 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-17 00:19:30.737879 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:30.737961 | orchestrator | 2026-03-17 00:19:30.737979 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-17 00:19:31.321377 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:31.321431 | orchestrator | 2026-03-17 00:19:31.321439 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-17 00:19:32.525594 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:32.525655 | orchestrator | 2026-03-17 00:19:32.525670 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-17 00:19:48.406122 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:48.406361 | orchestrator | 2026-03-17 00:19:48.406380 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-17 00:19:49.086799 | orchestrator | ok: [testbed-manager] 2026-03-17 00:19:49.086891 | orchestrator | 2026-03-17 00:19:49.086910 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-17 00:19:49.144579 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:19:49.144639 | orchestrator | 2026-03-17 00:19:49.144647 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-17 00:19:50.074845 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:50.074913 | orchestrator | 2026-03-17 00:19:50.074928 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-17 00:19:51.024708 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:51.024753 | orchestrator | 2026-03-17 00:19:51.024762 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-17 00:19:51.594480 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:51.594546 | orchestrator | 2026-03-17 00:19:51.594561 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-17 00:19:51.633728 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-17 00:19:51.633845 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-17 00:19:51.633862 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-17 00:19:51.633874 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-17 00:19:54.281873 | orchestrator | changed: [testbed-manager] 2026-03-17 00:19:54.281954 | orchestrator | 2026-03-17 00:19:54.281970 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-17 00:20:03.007794 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-17 00:20:03.007844 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-17 00:20:03.007854 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-17 00:20:03.007862 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-17 00:20:03.007873 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-17 00:20:03.007880 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-17 00:20:03.007887 | orchestrator | 2026-03-17 00:20:03.007895 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-17 00:20:04.083518 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:04.083608 | orchestrator | 2026-03-17 00:20:04.083625 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-17 00:20:04.122088 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:20:04.122225 | orchestrator | 2026-03-17 00:20:04.122242 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-17 00:20:07.040449 | orchestrator | changed: [testbed-manager] 2026-03-17 00:20:07.040488 | orchestrator | 2026-03-17 00:20:07.040497 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-17 00:20:07.078009 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:20:07.078069 | orchestrator | 2026-03-17 00:20:07.078077 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-17 00:21:39.448702 | orchestrator | changed: [testbed-manager] 2026-03-17 
00:21:39.449481 | orchestrator | 2026-03-17 00:21:39.449520 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-17 00:21:40.456695 | orchestrator | ok: [testbed-manager] 2026-03-17 00:21:40.456775 | orchestrator | 2026-03-17 00:21:40.456790 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:21:40.456802 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-17 00:21:40.456811 | orchestrator | 2026-03-17 00:21:40.774643 | orchestrator | ok: Runtime: 0:02:16.097974 2026-03-17 00:21:40.792206 | 2026-03-17 00:21:40.792343 | TASK [Reboot manager] 2026-03-17 00:21:42.329686 | orchestrator | ok: Runtime: 0:00:00.895209 2026-03-17 00:21:42.352633 | 2026-03-17 00:21:42.352913 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-17 00:21:55.811078 | orchestrator | ok 2026-03-17 00:21:55.820503 | 2026-03-17 00:21:55.820619 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-17 00:22:55.865027 | orchestrator | ok 2026-03-17 00:22:55.874627 | 2026-03-17 00:22:55.874784 | TASK [Deploy manager + bootstrap nodes] 2026-03-17 00:22:58.206415 | orchestrator | 2026-03-17 00:22:58.206650 | orchestrator | # DEPLOY MANAGER 2026-03-17 00:22:58.206676 | orchestrator | 2026-03-17 00:22:58.206691 | orchestrator | + set -e 2026-03-17 00:22:58.206704 | orchestrator | + echo 2026-03-17 00:22:58.206718 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-17 00:22:58.206736 | orchestrator | + echo 2026-03-17 00:22:58.206785 | orchestrator | + cat /opt/manager-vars.sh 2026-03-17 00:22:58.209614 | orchestrator | export NUMBER_OF_NODES=6 2026-03-17 00:22:58.209639 | orchestrator | 2026-03-17 00:22:58.209653 | orchestrator | export CEPH_VERSION=reef 2026-03-17 00:22:58.209667 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-17 00:22:58.209680 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-17 00:22:58.209702 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-03-17 00:22:58.209713 | orchestrator | 2026-03-17 00:22:58.209731 | orchestrator | export ARA=false 2026-03-17 00:22:58.209743 | orchestrator | export DEPLOY_MODE=manager 2026-03-17 00:22:58.209760 | orchestrator | export TEMPEST=true 2026-03-17 00:22:58.209771 | orchestrator | export IS_ZUUL=true 2026-03-17 00:22:58.209782 | orchestrator | 2026-03-17 00:22:58.209800 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2026-03-17 00:22:58.209812 | orchestrator | export EXTERNAL_API=false 2026-03-17 00:22:58.209822 | orchestrator | 2026-03-17 00:22:58.209833 | orchestrator | export IMAGE_USER=ubuntu 2026-03-17 00:22:58.209847 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-17 00:22:58.209858 | orchestrator | 2026-03-17 00:22:58.209869 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-17 00:22:58.209885 | orchestrator | 2026-03-17 00:22:58.209896 | orchestrator | + echo 2026-03-17 00:22:58.209909 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 00:22:58.210691 | orchestrator | ++ export INTERACTIVE=false 2026-03-17 00:22:58.210712 | orchestrator | ++ INTERACTIVE=false 2026-03-17 00:22:58.210726 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 00:22:58.210739 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-17 00:22:58.210979 | orchestrator | + source /opt/manager-vars.sh 2026-03-17 00:22:58.211075 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 00:22:58.211093 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 00:22:58.211105 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 00:22:58.211141 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 00:22:58.211166 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 00:22:58.211179 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-17 00:22:58.211191 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-17 00:22:58.211202 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-17 00:22:58.211213 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-03-17 00:22:58.211240 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-03-17 00:22:58.211252 | orchestrator | ++ export ARA=false 2026-03-17 00:22:58.211263 | orchestrator | ++ ARA=false 2026-03-17 00:22:58.211274 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 00:22:58.211285 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 00:22:58.211296 | orchestrator | ++ export TEMPEST=true 2026-03-17 00:22:58.211307 | orchestrator | ++ TEMPEST=true 2026-03-17 00:22:58.211318 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 00:22:58.211329 | orchestrator | ++ IS_ZUUL=true 2026-03-17 00:22:58.211340 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2026-03-17 00:22:58.211351 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2026-03-17 00:22:58.211362 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 00:22:58.211373 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 00:22:58.211384 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 00:22:58.211395 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 00:22:58.211410 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 00:22:58.211422 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 00:22:58.211433 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 00:22:58.211444 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 00:22:58.211455 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-17 00:22:58.269707 | orchestrator | + docker version 2026-03-17 00:22:58.373877 | orchestrator | Client: Docker Engine - Community 2026-03-17 00:22:58.373980 | orchestrator | Version: 27.5.1 2026-03-17 00:22:58.373994 | orchestrator | API version: 1.47 2026-03-17 00:22:58.374008 | orchestrator | Go version: go1.22.11 2026-03-17 00:22:58.374075 | orchestrator | Git commit: 9f9e405 2026-03-17 00:22:58.374088 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-17 00:22:58.374100 | orchestrator | OS/Arch: linux/amd64 2026-03-17 00:22:58.374146 | orchestrator | Context: default 2026-03-17 00:22:58.374158 | orchestrator | 2026-03-17 00:22:58.374169 | orchestrator | Server: Docker Engine - Community 2026-03-17 00:22:58.374181 | orchestrator | Engine: 2026-03-17 00:22:58.374192 | orchestrator | Version: 27.5.1 2026-03-17 00:22:58.374203 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-17 00:22:58.374244 | orchestrator | Go version: go1.22.11 2026-03-17 00:22:58.374255 | orchestrator | Git commit: 4c9b3b0 2026-03-17 00:22:58.374267 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-17 00:22:58.374277 | orchestrator | OS/Arch: linux/amd64 2026-03-17 00:22:58.374288 | orchestrator | Experimental: false 2026-03-17 00:22:58.374299 | orchestrator | containerd: 2026-03-17 00:22:58.374310 | orchestrator | Version: v2.2.2 2026-03-17 00:22:58.374321 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-17 00:22:58.374332 | orchestrator | runc: 2026-03-17 00:22:58.374343 | orchestrator | Version: 1.3.4 2026-03-17 00:22:58.374355 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-17 00:22:58.374365 | orchestrator | docker-init: 2026-03-17 00:22:58.374376 | orchestrator | Version: 0.19.0 2026-03-17 00:22:58.374388 | orchestrator | GitCommit: de40ad0 2026-03-17 00:22:58.376269 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-17 00:22:58.385012 | orchestrator | + set -e 2026-03-17 00:22:58.385061 | orchestrator | + source /opt/manager-vars.sh 2026-03-17 00:22:58.385075 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 00:22:58.385088 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 00:22:58.385099 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 00:22:58.385130 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 00:22:58.385143 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 
00:22:58.385154 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-17 00:22:58.385166 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-17 00:22:58.385177 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-17 00:22:58.385187 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-03-17 00:22:58.385198 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-03-17 00:22:58.385209 | orchestrator | ++ export ARA=false 2026-03-17 00:22:58.385220 | orchestrator | ++ ARA=false 2026-03-17 00:22:58.385231 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 00:22:58.385242 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 00:22:58.385252 | orchestrator | ++ export TEMPEST=true 2026-03-17 00:22:58.385263 | orchestrator | ++ TEMPEST=true 2026-03-17 00:22:58.385274 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 00:22:58.385285 | orchestrator | ++ IS_ZUUL=true 2026-03-17 00:22:58.385295 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2026-03-17 00:22:58.385306 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2026-03-17 00:22:58.385317 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 00:22:58.385328 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 00:22:58.385339 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 00:22:58.385349 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 00:22:58.385367 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 00:22:58.385379 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 00:22:58.385390 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 00:22:58.385401 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 00:22:58.385412 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 00:22:58.385423 | orchestrator | ++ export INTERACTIVE=false 2026-03-17 00:22:58.385433 | orchestrator | ++ INTERACTIVE=false 2026-03-17 00:22:58.385444 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 00:22:58.385460 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-17 00:22:58.385471 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-17 00:22:58.385481 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-17 00:22:58.385492 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-03-17 00:22:58.392285 | orchestrator | + set -e 2026-03-17 00:22:58.392320 | orchestrator | + VERSION=reef 2026-03-17 00:22:58.393021 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-17 00:22:58.399165 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-17 00:22:58.399186 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-17 00:22:58.403357 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-03-17 00:22:58.409426 | orchestrator | + set -e 2026-03-17 00:22:58.409822 | orchestrator | + VERSION=2025.1 2026-03-17 00:22:58.410705 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-17 00:22:58.414251 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-17 00:22:58.414300 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-03-17 00:22:58.419284 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-17 00:22:58.420028 | orchestrator | ++ semver latest 7.0.0 2026-03-17 00:22:58.479505 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:22:58.479618 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-17 00:22:58.479639 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-17 00:22:58.480351 | orchestrator | ++ semver latest 10.0.0-0 2026-03-17 00:22:58.535923 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:22:58.536209 | orchestrator | ++ semver 2025.1 2025.1 2026-03-17 00:22:58.610594 | orchestrator | + [[ 0 -ge 0 ]] 2026-03-17 00:22:58.610703 | orchestrator | + sed -i 
'/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-17 00:22:58.617193 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-17 00:22:58.621894 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-17 00:22:58.710635 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-17 00:22:58.711900 | orchestrator | + source /opt/venv/bin/activate 2026-03-17 00:22:58.712620 | orchestrator | ++ deactivate nondestructive 2026-03-17 00:22:58.712663 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:22:58.712686 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:22:58.712714 | orchestrator | ++ hash -r 2026-03-17 00:22:58.712736 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:22:58.712749 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-17 00:22:58.712761 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-17 00:22:58.712774 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-17 00:22:58.713014 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-17 00:22:58.713032 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-17 00:22:58.713048 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-17 00:22:58.713067 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-17 00:22:58.713079 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-17 00:22:58.713142 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-17 00:22:58.713156 | orchestrator | ++ export PATH 2026-03-17 00:22:58.713167 | orchestrator | ++ '[' -n '' ']' 2026-03-17 00:22:58.713183 | orchestrator | ++ '[' -z '' ']' 2026-03-17 00:22:58.713194 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-17 00:22:58.713205 | orchestrator | ++ PS1='(venv) ' 2026-03-17 00:22:58.713216 | orchestrator | ++ export PS1 2026-03-17 00:22:58.713228 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-17 00:22:58.713239 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-17 00:22:58.713249 | orchestrator | ++ hash -r 2026-03-17 00:22:58.713271 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-17 00:22:59.698151 | orchestrator | 2026-03-17 00:22:59.698260 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-17 00:22:59.698277 | orchestrator | 2026-03-17 00:22:59.698289 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-17 00:23:00.192266 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:00.192356 | orchestrator | 2026-03-17 00:23:00.192368 | orchestrator | TASK [Copy fact files] ********************************************************* 
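The activate chatter above is plain PATH bookkeeping; condensed, `source /opt/venv/bin/activate` amounts to the following (a sketch of venv's standard activate behavior, not OSISM-specific code):

```shell
# Condensed sketch of what venv's bin/activate does, per the xtrace above:
# save the old PATH, prepend the venv's bin directory, export markers.
VIRTUAL_ENV=/opt/venv
_OLD_VIRTUAL_PATH="$PATH"
PATH="${VIRTUAL_ENV}/bin:${PATH}"
export VIRTUAL_ENV PATH

echo "$PATH" | cut -d: -f1            # -> /opt/venv/bin

# deactivate (seen at the end of the trace) restores the saved PATH
PATH="$_OLD_VIRTUAL_PATH"
unset VIRTUAL_ENV _OLD_VIRTUAL_PATH
export PATH
```

With the venv active, the play is launched as `ansible-playbook -i testbed-manager, …`; the trailing comma tells ansible-playbook to treat the argument as a literal host list rather than a path to an inventory file.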
2026-03-17 00:23:01.067136 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:01.067245 | orchestrator | 2026-03-17 00:23:01.067264 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-17 00:23:01.067277 | orchestrator | 2026-03-17 00:23:01.067288 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:23:03.239969 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:03.240087 | orchestrator | 2026-03-17 00:23:03.240104 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-17 00:23:03.292479 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:03.292581 | orchestrator | 2026-03-17 00:23:03.292597 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-17 00:23:03.690408 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:03.690520 | orchestrator | 2026-03-17 00:23:03.690536 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-17 00:23:03.721309 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:23:03.722252 | orchestrator | 2026-03-17 00:23:03.722286 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-17 00:23:04.015070 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:04.015246 | orchestrator | 2026-03-17 00:23:04.015266 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-17 00:23:04.311200 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:04.311309 | orchestrator | 2026-03-17 00:23:04.311325 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-17 00:23:04.415209 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:23:04.415302 | orchestrator | 2026-03-17 00:23:04.415317 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-17 00:23:04.415330 | orchestrator | 2026-03-17 00:23:04.415341 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:23:06.034167 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:06.034290 | orchestrator | 2026-03-17 00:23:06.034318 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-17 00:23:06.129174 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-17 00:23:06.129257 | orchestrator | 2026-03-17 00:23:06.129268 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-17 00:23:06.178261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-17 00:23:06.178362 | orchestrator | 2026-03-17 00:23:06.178378 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-17 00:23:07.169052 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-17 00:23:07.169152 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-17 00:23:07.169159 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-17 00:23:07.169164 | orchestrator | 2026-03-17 00:23:07.169170 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-17 00:23:08.757982 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-17 00:23:08.758179 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-17 00:23:08.758198 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-17 00:23:08.758211 | orchestrator | 2026-03-17 00:23:08.758237 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-17 00:23:09.342214 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:23:09.342314 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:09.342330 | orchestrator | 2026-03-17 00:23:09.342342 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-17 00:23:09.938887 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:23:09.939003 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:09.939021 | orchestrator | 2026-03-17 00:23:09.939033 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-17 00:23:09.996724 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:23:09.996823 | orchestrator | 2026-03-17 00:23:09.996840 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-17 00:23:10.327808 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:10.327991 | orchestrator | 2026-03-17 00:23:10.328011 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-17 00:23:10.397632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-17 00:23:10.397725 | orchestrator | 2026-03-17 00:23:10.397763 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-17 00:23:11.492479 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:11.492580 | orchestrator | 2026-03-17 00:23:11.492598 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-17 00:23:12.269504 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:12.269593 | orchestrator | 2026-03-17 00:23:12.269605 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-17 00:23:22.534572 | 
orchestrator | changed: [testbed-manager] 2026-03-17 00:23:22.534690 | orchestrator | 2026-03-17 00:23:22.534708 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-17 00:23:22.585318 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:23:22.585406 | orchestrator | 2026-03-17 00:23:22.585420 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-17 00:23:22.585462 | orchestrator | 2026-03-17 00:23:22.585474 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:23:25.441265 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:25.441362 | orchestrator | 2026-03-17 00:23:25.441377 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-17 00:23:25.550415 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-17 00:23:25.550502 | orchestrator | 2026-03-17 00:23:25.550515 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-17 00:23:25.607906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:23:25.608006 | orchestrator | 2026-03-17 00:23:25.608022 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-17 00:23:27.697918 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:27.697970 | orchestrator | 2026-03-17 00:23:27.697981 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-17 00:23:27.739614 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:27.739711 | orchestrator | 2026-03-17 00:23:27.739728 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-17 00:23:27.849342 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-17 00:23:27.849433 | orchestrator | 2026-03-17 00:23:27.849447 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-17 00:23:30.428643 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-17 00:23:30.428748 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-17 00:23:30.428764 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-17 00:23:30.428776 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-17 00:23:30.428787 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-17 00:23:30.428798 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-17 00:23:30.428809 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-17 00:23:30.428820 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-17 00:23:30.428831 | orchestrator | 2026-03-17 00:23:30.428843 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-17 00:23:31.022509 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:31.022609 | orchestrator | 2026-03-17 00:23:31.022624 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-17 00:23:31.657667 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:31.657767 | orchestrator | 2026-03-17 00:23:31.657782 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-17 00:23:31.741924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-17 00:23:31.742076 | orchestrator | 2026-03-17 00:23:31.742097 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-17 00:23:32.930434 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-17 00:23:32.930554 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-17 00:23:32.930571 | orchestrator | 2026-03-17 00:23:32.930584 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-17 00:23:33.544478 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:33.544567 | orchestrator | 2026-03-17 00:23:33.544581 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-17 00:23:33.593534 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:23:33.593620 | orchestrator | 2026-03-17 00:23:33.593634 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-17 00:23:33.681259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-17 00:23:33.681349 | orchestrator | 2026-03-17 00:23:33.681363 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-17 00:23:34.283725 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:34.283885 | orchestrator | 2026-03-17 00:23:34.283912 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-17 00:23:34.344910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-17 00:23:34.344999 | orchestrator | 2026-03-17 00:23:34.345013 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-17 00:23:35.654540 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-17 00:23:35.654644 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-17 00:23:35.654660 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:35.654673 | orchestrator | 2026-03-17 00:23:35.654685 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-17 00:23:36.275612 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:36.275707 | orchestrator | 2026-03-17 00:23:36.275723 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-17 00:23:36.336557 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:23:36.336653 | orchestrator | 2026-03-17 00:23:36.336667 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-17 00:23:36.435071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-17 00:23:36.435192 | orchestrator | 2026-03-17 00:23:36.435207 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-17 00:23:36.924715 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:36.924816 | orchestrator | 2026-03-17 00:23:36.924832 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-17 00:23:37.307814 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:37.307914 | orchestrator | 2026-03-17 00:23:37.307931 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-17 00:23:38.494727 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-17 00:23:38.494845 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-17 00:23:38.494862 | orchestrator | 2026-03-17 00:23:38.494875 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-17 00:23:39.128254 | orchestrator | changed: [testbed-manager] 2026-03-17 
00:23:39.128352 | orchestrator | 2026-03-17 00:23:39.128367 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-17 00:23:39.507532 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:39.507632 | orchestrator | 2026-03-17 00:23:39.507649 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-17 00:23:39.861379 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:39.861458 | orchestrator | 2026-03-17 00:23:39.861467 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-17 00:23:39.913612 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:23:39.913727 | orchestrator | 2026-03-17 00:23:39.913750 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-17 00:23:39.992510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-17 00:23:39.992600 | orchestrator | 2026-03-17 00:23:39.992614 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-17 00:23:40.038784 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:40.038875 | orchestrator | 2026-03-17 00:23:40.038889 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-17 00:23:42.023562 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-17 00:23:42.023659 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-17 00:23:42.023675 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-17 00:23:42.023686 | orchestrator | 2026-03-17 00:23:42.023699 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-17 00:23:42.714985 | orchestrator | changed: [testbed-manager] 2026-03-17 
00:23:42.715075 | orchestrator | 2026-03-17 00:23:42.715090 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-17 00:23:43.410334 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:43.410473 | orchestrator | 2026-03-17 00:23:43.410490 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-17 00:23:44.100669 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:44.100766 | orchestrator | 2026-03-17 00:23:44.100780 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-17 00:23:44.168144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-17 00:23:44.168223 | orchestrator | 2026-03-17 00:23:44.168232 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-17 00:23:44.208971 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:44.209062 | orchestrator | 2026-03-17 00:23:44.209077 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-17 00:23:44.879369 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-17 00:23:44.879475 | orchestrator | 2026-03-17 00:23:44.879492 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-17 00:23:44.965230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-17 00:23:44.965297 | orchestrator | 2026-03-17 00:23:44.965303 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-17 00:23:45.587871 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:45.587940 | orchestrator | 2026-03-17 00:23:45.587947 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-17 00:23:46.143591 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:46.143684 | orchestrator | 2026-03-17 00:23:46.143698 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-17 00:23:46.203913 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:23:46.204007 | orchestrator | 2026-03-17 00:23:46.204022 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-17 00:23:46.251692 | orchestrator | ok: [testbed-manager] 2026-03-17 00:23:46.251768 | orchestrator | 2026-03-17 00:23:46.251778 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-17 00:23:47.089582 | orchestrator | changed: [testbed-manager] 2026-03-17 00:23:47.089677 | orchestrator | 2026-03-17 00:23:47.089692 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-17 00:24:54.210794 | orchestrator | changed: [testbed-manager] 2026-03-17 00:24:54.210912 | orchestrator | 2026-03-17 00:24:54.210929 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-17 00:24:55.115845 | orchestrator | ok: [testbed-manager] 2026-03-17 00:24:55.115944 | orchestrator | 2026-03-17 00:24:55.115960 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-17 00:24:55.170306 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:24:55.170410 | orchestrator | 2026-03-17 00:24:55.170449 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-17 00:25:24.054467 | orchestrator | changed: [testbed-manager] 2026-03-17 00:25:24.054597 | orchestrator | 2026-03-17 00:25:24.054615 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
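Among the config tasks earlier in this play, the role raises the host's inotify limits (`fs.inotify.max_user_watches` and `fs.inotify.max_user_instances`). In plain shell that is a sysctl.d drop-in; the sketch below writes to a temp file so it runs without root, and the values are assumptions, since the role's actual numbers are not visible in the log:

```shell
# Illustrative shell equivalent of the "Set fs.inotify.max_user_*" tasks:
# persist raised inotify limits via a sysctl.d drop-in file.
# Values are assumptions; the real path would be e.g. /etc/sysctl.d/99-osism.conf.
DROPIN=$(mktemp)
{
    echo 'fs.inotify.max_user_watches = 524288'
    echo 'fs.inotify.max_user_instances = 512'
} > "$DROPIN"

cat "$DROPIN"
# on a real host, apply immediately with: sudo sysctl -p "$DROPIN"
rm -f "$DROPIN"
```

Raised inotify limits matter here because the manager stack watches many configuration files and container logs; the distribution defaults are easy to exhaust.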
2026-03-17 00:25:24.139944 | orchestrator | ok: [testbed-manager] 2026-03-17 00:25:24.140044 | orchestrator | 2026-03-17 00:25:24.140059 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-17 00:25:24.140076 | orchestrator | 2026-03-17 00:25:24.140123 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-17 00:25:24.190181 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:25:24.190287 | orchestrator | 2026-03-17 00:25:24.190305 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-17 00:26:24.239562 | orchestrator | Pausing for 60 seconds 2026-03-17 00:26:24.239685 | orchestrator | changed: [testbed-manager] 2026-03-17 00:26:24.239701 | orchestrator | 2026-03-17 00:26:24.239716 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-17 00:26:27.253976 | orchestrator | changed: [testbed-manager] 2026-03-17 00:26:27.254155 | orchestrator | 2026-03-17 00:26:27.254171 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-17 00:27:29.246402 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-17 00:27:29.246543 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-17 00:27:29.246559 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
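The "Wait for an healthy manager service" handler above is a standard Ansible retries/until poll (50 retries allowed here; two attempts failed before the service came up). Its shell analogue, with a simulated probe that succeeds on the third attempt to mirror the trace:

```shell
# Illustrative retry-until-healthy loop, the shell analogue of the Ansible
# "retries/until" pattern used by the manager handlers above. The probe is
# simulated: it succeeds on the third call, mirroring the two
# "FAILED - RETRYING" lines in the trace. A real probe would query the
# service's health endpoint or container healthcheck instead.
attempts=0
probe() {                     # stand-in for a real health check
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

retries=50                    # the handler above also allows 50 retries
until probe; do
    retries=$((retries - 1))
    if [ "$retries" -le 0 ]; then
        echo "manager service never became healthy" >&2
        exit 1
    fi
done
echo "healthy after ${attempts} attempts"   # -> healthy after 3 attempts
```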
2026-03-17 00:27:29.246571 | orchestrator | changed: [testbed-manager] 2026-03-17 00:27:29.246584 | orchestrator | 2026-03-17 00:27:29.246634 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-17 00:27:34.926353 | orchestrator | changed: [testbed-manager] 2026-03-17 00:27:34.926460 | orchestrator | 2026-03-17 00:27:34.926477 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-17 00:27:35.015671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-17 00:27:35.015761 | orchestrator | 2026-03-17 00:27:35.015777 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-17 00:27:35.015791 | orchestrator | 2026-03-17 00:27:35.015803 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-17 00:27:35.064810 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:27:35.064877 | orchestrator | 2026-03-17 00:27:35.064887 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-17 00:27:35.132124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-17 00:27:35.132211 | orchestrator | 2026-03-17 00:27:35.132225 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-17 00:27:35.920967 | orchestrator | changed: [testbed-manager] 2026-03-17 00:27:35.921145 | orchestrator | 2026-03-17 00:27:35.921163 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-17 00:27:39.149889 | orchestrator | ok: [testbed-manager] 2026-03-17 00:27:39.149989 | orchestrator | 2026-03-17 00:27:39.150007 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-17 00:27:39.230561 | orchestrator | ok: [testbed-manager] => { 2026-03-17 00:27:39.230668 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-17 00:27:39.230684 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-17 00:27:39.230696 | orchestrator | "Checking running containers against expected versions...", 2026-03-17 00:27:39.230709 | orchestrator | "", 2026-03-17 00:27:39.230721 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-17 00:27:39.230733 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-17 00:27:39.230744 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.230755 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-17 00:27:39.230767 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.230778 | orchestrator | "", 2026-03-17 00:27:39.230789 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-17 00:27:39.230801 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-17 00:27:39.230812 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.230823 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-17 00:27:39.230834 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.230845 | orchestrator | "", 2026-03-17 00:27:39.230856 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-17 00:27:39.230867 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-17 00:27:39.230878 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.230889 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-17 00:27:39.230900 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.230912 | orchestrator | "", 2026-03-17 00:27:39.230923 | 
orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-17 00:27:39.230934 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-17 00:27:39.230945 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.230984 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-17 00:27:39.230996 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231007 | orchestrator | "", 2026-03-17 00:27:39.231017 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-17 00:27:39.231028 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-03-17 00:27:39.231039 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231078 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2025.1", 2026-03-17 00:27:39.231090 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231103 | orchestrator | "", 2026-03-17 00:27:39.231116 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-17 00:27:39.231129 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231141 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231153 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231167 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231179 | orchestrator | "", 2026-03-17 00:27:39.231191 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-17 00:27:39.231203 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-17 00:27:39.231215 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231228 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-17 00:27:39.231241 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231253 | orchestrator | "", 2026-03-17 00:27:39.231279 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-17 00:27:39.231292 | orchestrator | " 
Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-17 00:27:39.231305 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231317 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-17 00:27:39.231329 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231347 | orchestrator | "", 2026-03-17 00:27:39.231359 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-17 00:27:39.231373 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-17 00:27:39.231385 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231395 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-17 00:27:39.231406 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231417 | orchestrator | "", 2026-03-17 00:27:39.231428 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-17 00:27:39.231439 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-17 00:27:39.231450 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231461 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-17 00:27:39.231472 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231483 | orchestrator | "", 2026-03-17 00:27:39.231494 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-17 00:27:39.231505 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231516 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231527 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231538 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231548 | orchestrator | "", 2026-03-17 00:27:39.231559 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-17 00:27:39.231570 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231581 | 
orchestrator | " Enabled: true", 2026-03-17 00:27:39.231592 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231603 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231614 | orchestrator | "", 2026-03-17 00:27:39.231624 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-17 00:27:39.231636 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231646 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231657 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231668 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231688 | orchestrator | "", 2026-03-17 00:27:39.231699 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-17 00:27:39.231710 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231721 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231732 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231743 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231754 | orchestrator | "", 2026-03-17 00:27:39.231765 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-17 00:27:39.231795 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231807 | orchestrator | " Enabled: true", 2026-03-17 00:27:39.231818 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-17 00:27:39.231829 | orchestrator | " Status: ✅ MATCH", 2026-03-17 00:27:39.231840 | orchestrator | "", 2026-03-17 00:27:39.231851 | orchestrator | "=== Summary ===", 2026-03-17 00:27:39.231862 | orchestrator | "Errors (version mismatches): 0", 2026-03-17 00:27:39.231872 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-17 00:27:39.231883 | orchestrator | "", 2026-03-17 00:27:39.231894 | orchestrator | "✅ All running containers match expected 
versions!" 2026-03-17 00:27:39.231905 | orchestrator | ] 2026-03-17 00:27:39.231917 | orchestrator | } 2026-03-17 00:27:39.231928 | orchestrator | 2026-03-17 00:27:39.231939 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-17 00:27:39.288479 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:27:39.288563 | orchestrator | 2026-03-17 00:27:39.288574 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:27:39.288586 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-17 00:27:39.288595 | orchestrator | 2026-03-17 00:27:39.378807 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-17 00:27:39.378894 | orchestrator | + deactivate 2026-03-17 00:27:39.378908 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-17 00:27:39.378921 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-17 00:27:39.379481 | orchestrator | + export PATH 2026-03-17 00:27:39.379504 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-17 00:27:39.379515 | orchestrator | + '[' -n '' ']' 2026-03-17 00:27:39.379526 | orchestrator | + hash -r 2026-03-17 00:27:39.379537 | orchestrator | + '[' -n '' ']' 2026-03-17 00:27:39.379547 | orchestrator | + unset VIRTUAL_ENV 2026-03-17 00:27:39.379558 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-17 00:27:39.379569 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-17 00:27:39.379580 | orchestrator | + unset -f deactivate 2026-03-17 00:27:39.379591 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-17 00:27:39.386539 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-17 00:27:39.386632 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-17 00:27:39.386644 | orchestrator | + local max_attempts=60 2026-03-17 00:27:39.386654 | orchestrator | + local name=ceph-ansible 2026-03-17 00:27:39.386663 | orchestrator | + local attempt_num=1 2026-03-17 00:27:39.386884 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:27:39.419283 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:27:39.419363 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-17 00:27:39.419376 | orchestrator | + local max_attempts=60 2026-03-17 00:27:39.419387 | orchestrator | + local name=kolla-ansible 2026-03-17 00:27:39.419398 | orchestrator | + local attempt_num=1 2026-03-17 00:27:39.419752 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-17 00:27:39.451762 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:27:39.451862 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-17 00:27:39.451875 | orchestrator | + local max_attempts=60 2026-03-17 00:27:39.451888 | orchestrator | + local name=osism-ansible 2026-03-17 00:27:39.451899 | orchestrator | + local attempt_num=1 2026-03-17 00:27:39.451989 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-17 00:27:39.484271 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:27:39.484361 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-17 00:27:39.484402 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-17 00:27:40.158864 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-17 00:27:40.325904 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-17 00:27:40.326094 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-17 00:27:40.326123 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-17 00:27:40.326138 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-17 00:27:40.326153 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-03-17 00:27:40.326167 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-17 00:27:40.326181 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-17 00:27:40.326221 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-17 00:27:40.326238 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-17 00:27:40.326253 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-17 00:27:40.326268 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-03-17 00:27:40.326282 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-17 00:27:40.326297 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-17 00:27:40.326311 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-17 00:27:40.326326 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-17 00:27:40.326341 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-17 00:27:40.330894 | orchestrator | ++ semver latest 7.0.0 2026-03-17 00:27:40.374271 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:27:40.374363 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-17 00:27:40.374378 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-17 00:27:40.379787 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-17 00:27:52.748872 | orchestrator | 2026-03-17 00:27:52 | INFO  | Prepare task for execution of resolvconf. 2026-03-17 00:27:52.950958 | orchestrator | 2026-03-17 00:27:52 | INFO  | Task c816d19c-d988-4a1f-8eb6-7d179d210a8b (resolvconf) was prepared for execution. 2026-03-17 00:27:52.951143 | orchestrator | 2026-03-17 00:27:52 | INFO  | It takes a moment until task c816d19c-d988-4a1f-8eb6-7d179d210a8b (resolvconf) has been started and output is visible here. 
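The `wait_for_container_healthy` calls traced earlier (`local max_attempts=60`, `docker inspect -f '{{.State.Health.Status}}'`, the `[[ healthy == healthy ]]` check) suggest a polling helper roughly like the sketch below. The actual testbed script is not shown in this log; the sleep interval and failure message here are assumptions.

```shell
# Sketch of a health-polling loop consistent with the trace above.
# Polling interval and error text are assumptions, not from the log.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll docker's health status until the container reports "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5  # assumed interval between checks
    done
}
```

In the log the containers were already healthy on the first check, so the loop body never ran.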
2026-03-17 00:28:05.724497 | orchestrator | 2026-03-17 00:28:05.724624 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-17 00:28:05.724652 | orchestrator | 2026-03-17 00:28:05.724674 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:28:05.724696 | orchestrator | Tuesday 17 March 2026 00:27:55 +0000 (0:00:00.162) 0:00:00.162 ********* 2026-03-17 00:28:05.724710 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:05.724722 | orchestrator | 2026-03-17 00:28:05.724733 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-17 00:28:05.724745 | orchestrator | Tuesday 17 March 2026 00:27:59 +0000 (0:00:03.682) 0:00:03.845 ********* 2026-03-17 00:28:05.724757 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:28:05.724768 | orchestrator | 2026-03-17 00:28:05.724780 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-17 00:28:05.724791 | orchestrator | Tuesday 17 March 2026 00:27:59 +0000 (0:00:00.053) 0:00:03.898 ********* 2026-03-17 00:28:05.724802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-17 00:28:05.724814 | orchestrator | 2026-03-17 00:28:05.724825 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-17 00:28:05.724847 | orchestrator | Tuesday 17 March 2026 00:27:59 +0000 (0:00:00.074) 0:00:03.973 ********* 2026-03-17 00:28:05.724859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:28:05.724870 | orchestrator | 2026-03-17 00:28:05.724881 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-17 00:28:05.724892 | orchestrator | Tuesday 17 March 2026 00:27:59 +0000 (0:00:00.086) 0:00:04.060 ********* 2026-03-17 00:28:05.724904 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:05.724915 | orchestrator | 2026-03-17 00:28:05.724926 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-17 00:28:05.724937 | orchestrator | Tuesday 17 March 2026 00:28:00 +0000 (0:00:01.188) 0:00:05.248 ********* 2026-03-17 00:28:05.724947 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:28:05.724958 | orchestrator | 2026-03-17 00:28:05.724969 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-17 00:28:05.724980 | orchestrator | Tuesday 17 March 2026 00:28:00 +0000 (0:00:00.043) 0:00:05.291 ********* 2026-03-17 00:28:05.724991 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:05.725002 | orchestrator | 2026-03-17 00:28:05.725013 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-17 00:28:05.725024 | orchestrator | Tuesday 17 March 2026 00:28:01 +0000 (0:00:00.530) 0:00:05.822 ********* 2026-03-17 00:28:05.725064 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:28:05.725083 | orchestrator | 2026-03-17 00:28:05.725108 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-17 00:28:05.725134 | orchestrator | Tuesday 17 March 2026 00:28:01 +0000 (0:00:00.074) 0:00:05.897 ********* 2026-03-17 00:28:05.725153 | orchestrator | changed: [testbed-manager] 2026-03-17 00:28:05.725172 | orchestrator | 2026-03-17 00:28:05.725191 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-17 00:28:05.725208 | orchestrator | Tuesday 17 March 2026 00:28:02 +0000 (0:00:00.607) 0:00:06.504 ********* 2026-03-17 00:28:05.725226 | orchestrator | changed: 
[testbed-manager] 2026-03-17 00:28:05.725244 | orchestrator | 2026-03-17 00:28:05.725293 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-17 00:28:05.725312 | orchestrator | Tuesday 17 March 2026 00:28:03 +0000 (0:00:01.097) 0:00:07.601 ********* 2026-03-17 00:28:05.725331 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:05.725349 | orchestrator | 2026-03-17 00:28:05.725366 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-17 00:28:05.725384 | orchestrator | Tuesday 17 March 2026 00:28:04 +0000 (0:00:01.016) 0:00:08.618 ********* 2026-03-17 00:28:05.725401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-17 00:28:05.725419 | orchestrator | 2026-03-17 00:28:05.725438 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-17 00:28:05.725456 | orchestrator | Tuesday 17 March 2026 00:28:04 +0000 (0:00:00.083) 0:00:08.702 ********* 2026-03-17 00:28:05.725475 | orchestrator | changed: [testbed-manager] 2026-03-17 00:28:05.725493 | orchestrator | 2026-03-17 00:28:05.725511 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:28:05.725531 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 00:28:05.725550 | orchestrator | 2026-03-17 00:28:05.725569 | orchestrator | 2026-03-17 00:28:05.725587 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:28:05.725607 | orchestrator | Tuesday 17 March 2026 00:28:05 +0000 (0:00:01.194) 0:00:09.896 ********* 2026-03-17 00:28:05.725625 | orchestrator | =============================================================================== 2026-03-17 00:28:05.725644 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.68s 2026-03-17 00:28:05.725656 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2026-03-17 00:28:05.725666 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.19s 2026-03-17 00:28:05.725677 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s 2026-03-17 00:28:05.725690 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s 2026-03-17 00:28:05.725709 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.61s 2026-03-17 00:28:05.725753 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2026-03-17 00:28:05.725772 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-03-17 00:28:05.725790 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-03-17 00:28:05.725807 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-03-17 00:28:05.725824 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-03-17 00:28:05.725853 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2026-03-17 00:28:05.725873 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.04s 2026-03-17 00:28:05.895964 | orchestrator | + osism apply sshconfig 2026-03-17 00:28:17.234529 | orchestrator | 2026-03-17 00:28:17 | INFO  | Prepare task for execution of sshconfig. 2026-03-17 00:28:17.310143 | orchestrator | 2026-03-17 00:28:17 | INFO  | Task 0869bae7-70f6-4759-b489-0cda28235fdd (sshconfig) was prepared for execution. 
2026-03-17 00:28:17.310211 | orchestrator | 2026-03-17 00:28:17 | INFO  | It takes a moment until task 0869bae7-70f6-4759-b489-0cda28235fdd (sshconfig) has been started and output is visible here. 2026-03-17 00:28:28.313362 | orchestrator | 2026-03-17 00:28:28.313477 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-17 00:28:28.313494 | orchestrator | 2026-03-17 00:28:28.313507 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-17 00:28:28.313518 | orchestrator | Tuesday 17 March 2026 00:28:20 +0000 (0:00:00.185) 0:00:00.185 ********* 2026-03-17 00:28:28.313557 | orchestrator | ok: [testbed-manager] 2026-03-17 00:28:28.313569 | orchestrator | 2026-03-17 00:28:28.313581 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-17 00:28:28.313592 | orchestrator | Tuesday 17 March 2026 00:28:21 +0000 (0:00:00.933) 0:00:01.118 ********* 2026-03-17 00:28:28.313603 | orchestrator | changed: [testbed-manager] 2026-03-17 00:28:28.313614 | orchestrator | 2026-03-17 00:28:28.313624 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-17 00:28:28.313635 | orchestrator | Tuesday 17 March 2026 00:28:21 +0000 (0:00:00.533) 0:00:01.652 ********* 2026-03-17 00:28:28.313646 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-17 00:28:28.313656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-17 00:28:28.313668 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-17 00:28:28.313678 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-17 00:28:28.313689 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-17 00:28:28.313699 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-17 00:28:28.313710 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-17 00:28:28.313721 | orchestrator | 2026-03-17 00:28:28.313732 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-17 00:28:28.313742 | orchestrator | Tuesday 17 March 2026 00:28:27 +0000 (0:00:05.632) 0:00:07.284 ********* 2026-03-17 00:28:28.313753 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:28:28.313763 | orchestrator | 2026-03-17 00:28:28.313774 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-17 00:28:28.313785 | orchestrator | Tuesday 17 March 2026 00:28:27 +0000 (0:00:00.099) 0:00:07.383 ********* 2026-03-17 00:28:28.313796 | orchestrator | changed: [testbed-manager] 2026-03-17 00:28:28.313807 | orchestrator | 2026-03-17 00:28:28.313818 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:28:28.313829 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:28:28.313841 | orchestrator | 2026-03-17 00:28:28.313851 | orchestrator | 2026-03-17 00:28:28.313862 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:28:28.313873 | orchestrator | Tuesday 17 March 2026 00:28:28 +0000 (0:00:00.545) 0:00:07.929 ********* 2026-03-17 00:28:28.313884 | orchestrator | =============================================================================== 2026-03-17 00:28:28.313895 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.63s 2026-03-17 00:28:28.313907 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.93s 2026-03-17 00:28:28.313920 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2026-03-17 00:28:28.313932 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2026-03-17 00:28:28.313944 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s 2026-03-17 00:28:28.477397 | orchestrator | + osism apply known-hosts 2026-03-17 00:28:39.896072 | orchestrator | 2026-03-17 00:28:39 | INFO  | Prepare task for execution of known-hosts. 2026-03-17 00:28:39.984717 | orchestrator | 2026-03-17 00:28:39 | INFO  | Task 12afce4f-b7fa-4f26-a46b-f9426dbb0771 (known-hosts) was prepared for execution. 2026-03-17 00:28:39.984859 | orchestrator | 2026-03-17 00:28:39 | INFO  | It takes a moment until task 12afce4f-b7fa-4f26-a46b-f9426dbb0771 (known-hosts) has been started and output is visible here. 2026-03-17 00:28:55.022901 | orchestrator | 2026-03-17 00:28:55.022987 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-17 00:28:55.022998 | orchestrator | 2026-03-17 00:28:55.023005 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-17 00:28:55.023057 | orchestrator | Tuesday 17 March 2026 00:28:43 +0000 (0:00:00.199) 0:00:00.199 ********* 2026-03-17 00:28:55.023066 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-17 00:28:55.023074 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-17 00:28:55.023081 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-17 00:28:55.023089 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-17 00:28:55.023096 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-17 00:28:55.023103 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-17 00:28:55.023120 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-17 00:28:55.023127 | orchestrator | 2026-03-17 00:28:55.023135 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-17 
00:28:55.023144 | orchestrator | Tuesday 17 March 2026 00:28:49 +0000 (0:00:06.426) 0:00:06.626 ********* 2026-03-17 00:28:55.023152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-17 00:28:55.023161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-17 00:28:55.023168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-17 00:28:55.023176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-17 00:28:55.023183 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-17 00:28:55.023190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-17 00:28:55.023197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-17 00:28:55.023204 | orchestrator | 2026-03-17 00:28:55.023212 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:28:55.023219 | orchestrator | Tuesday 17 March 2026 00:28:49 +0000 (0:00:00.154) 0:00:06.780 ********* 2026-03-17 00:28:55.023227 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdc1m6uqcRphoWxb3xQyaHch3HPdEaMiHlrHoGC/gqB) 2026-03-17 00:28:55.023238 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdaXcGq9pnHcgSqPRW91TCeUnSHBSeXZUpQgc2Wj1OyDHVObpi2i0UacBxFvaoZMLoz0kOZr6UMZCN3z3KN4EoMdednnq/O+z7vYoPd7kC+ZsDh77qIfs1xJ3wp9EqkMh7Da0NaaqLRBUySI/pfxNxgaQRfE2zhBu1LnminbpW0HjJQFLiB9k6iW/C5ERutRqJ8fQK/IBwypXjfdst7wzIsn+IHanE43oYRLiz8lGGrdgV/JEnjMc2tAg7i46Jp+oxO1W4lQRm4+9R2xjL6yICkq8HQdAwMwJwseAgvRlYBRCMrbZO6hfZv3IBj4NPBJHr9dMKFD4UjP8bV7xZpD+vCFNVuyo2gfAtLWAx6OrCpcwgip6TaROeGEm1SadNr16CgbRcscDM4EmpjG4MDfXaccSl+duy6UINdfeJ81nARLJAfM/CGw6qDBEN89SN0HSH4mLPYnD2ctGbhw9NN5m/i2g5dYKe7iu++ZcQmgJ3YXcoWcY3jRlPRt4wacq4/c8=) 2026-03-17 00:28:55.023249 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBg2fDHs9lape21CUFinj79B34+nxfG+mTl89u52+KCkVrQ7Lm4Ywh8Ad97dGstevX0I9qxZKqpJGpwzMTsIJ5g=) 2026-03-17 00:28:55.023258 | orchestrator | 2026-03-17 00:28:55.023265 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:28:55.023273 | orchestrator | Tuesday 17 March 2026 00:28:50 +0000 (0:00:01.135) 0:00:07.916 ********* 2026-03-17 00:28:55.023302 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChsWZ308Fbjp1k1wHmv01kBCA+9ECnioWRAqj8m7gD5lJ0H5xwcCejjuThqbdjC/SqsgCz8yiqMHl2gCKnf9qx2IPK6VffyXxJ9w/dy2XufY+UyW7AwLPnI2ahX+5TLeOGl8FYVH8PpkDP5q2fCu+b9nHRa2Ze7AVrt3SjaYO64gmMcuWRp+cHXW0zEG4sTPPCAFrCeYBR+0zduX8galgPVkwhyokU+RLkB/CQ6TVc2DyPgdmKibE6/vjk86ybdcLZ+pnd6Vh05+erWXJUBzTocuLtTSVexgCHhIW3HGJOgD0pfgdI3q5EGH0PuYu/zvdLIvn91o8XQVEFm7+kkdP+sEk+AWakN7W0VDOORs2Al2bv/UNVybmAjUlRgMdQFRvRgah5BIVNaPhZmluYiBRLLI+4k2pqnTBhOk7MuoAMZ0AGKP7a0J8Zfab+4LD7aUYVdZeXk49FaJu8zKHvKZAa4uhkPfgEEd5SGxGfpy2M5JM/Wp5nIM4oBh2U6ALNQx8=) 
2026-03-17 00:28:55.023312 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdOhDsuIlsrWrlfhhUrrRbIWFwQlwU0jxgSXM0yKp6/+QHNvmjdX9l7LflJbOpeTbSV9WFnX6OV9dpIUFhnpQs=) 2026-03-17 00:28:55.023320 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOrFCJBMoq8hKE/thPdNHCB4R2CPluwX6EX/jT7tlLGI) 2026-03-17 00:28:55.023327 | orchestrator | 2026-03-17 00:28:55.023335 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:28:55.023343 | orchestrator | Tuesday 17 March 2026 00:28:51 +0000 (0:00:00.983) 0:00:08.900 ********* 2026-03-17 00:28:55.023350 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHJwo3ka/EHQuH5EMQy8rUqp5Lc9J3Ni7O7/n/Bc5WlRB5NqLBcyyE0EHnOSW/JGe7Psji2P6hdwcKbxuG/JJ1U=) 2026-03-17 00:28:55.023358 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDz1qNCYAZZtGsYJsPFVKLjGV7oJyHKPLP9uWB7Twztc) 2026-03-17 00:28:55.023405 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjzgCu5ELXsfHOe3Pjfo71nCQbfGoDsDl3OTaXZrgYsSKlR5uK4thMduIscu3Rxfwyl0or9WbzVu7NsQUCdE+3KF9fpo+FyQPNSwcdJYI6FPmyovfWmyhhkXyGFJsmihU2UB9iEfroMgWfMOCixCiglskE11vrpIMRpZwiPTfgScOUnmYrc7o//WF6GF1ZA+bDKK5HSyJ7xgVIRHjPKUn8MCrfP1WcC9VK6EmtU2KZhY4oiJLyHJJr6EcE/jAlrQNIE2wW168zsvSvn1ULoizm5qAINjNQg9yLKBCUxwaYeiDnLX8xB5Uhq0srw1904GHpbZUM18KOseVg19CPpvXhx+b1oj70AgL3kP2slJH79/YaUXVsSmOKvBstmojznxTuCVzaxvVQjYeVANIm6z1oLHgvCepy4oZUoN6DKWpns4N5mlLPUzpAHAzIezk2STBh20vmGKAXXZ9j4dnI0xjbufkyX5hqbSAme9zKdq5TBnVMoNkOosPxB8d7EvY0c+c=) 2026-03-17 00:28:55.023413 | orchestrator | 2026-03-17 00:28:55.023419 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:28:55.023426 
| orchestrator | Tuesday 17 March 2026 00:28:52 +0000 (0:00:00.937) 0:00:09.837 ********* 2026-03-17 00:28:55.023433 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAJ+Hi2c9vVGmydISc9iZvkiJdsf3OmosUXanvjA8dxBn2kFXTYeSeq93NsGxF7yd7Pw0VL0ZB/sRKNfP2ZDizc=) 2026-03-17 00:28:55.023441 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD3r16Kqp5bQpnwqTXfAqoEajf3dmaoI6jrsr8RADFh6Lw692vIyFy/k3Cj1LPI90A9AQyjaPxx3Iwu3JlzVyJy5PF7xs3hZtMIYdzJLatqrbKxGi/M5GzxPVawrljji5pCzlfuqsmf7WIMnMO3uwjn6G0RdHt2AMDKP7wE0hZF17P+3eI7SBh+66GcefklNaCTBGMWkm+DJnOVpEjxTwlDJLw6+gc2iAdoe9/EksquIExV1OmrjeRdDbPEoolhqjmmfrEJeLMLPxtyAdlZR/g9azxXJZMHzAoeTJCGSwVsupOKL/0nHHp5iesrtH5mWNo4IiopIVnod1E8aPfeElM8cfB91pcIGT6D49ekC7u8q0yk5VdEpa4V4uMRUJA9to0/XGzdpjb77Y91bPw21wThkwGBHPLxC7bxLWqOW4P3beM9BV1Ujna2bWH5FmQboTacjbl6Gk2fAF7Lmf2zQNUYF0KjvWMdK2HPw4ZsliswHkqXO7KLVewNBn3cOq5H+Ps=) 2026-03-17 00:28:55.023450 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIINBuzvUE+S9zxQ4fk7SA+DipNadvZh31Jy1nhBY5toI) 2026-03-17 00:28:55.023457 | orchestrator | 2026-03-17 00:28:55.023466 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:28:55.023473 | orchestrator | Tuesday 17 March 2026 00:28:53 +0000 (0:00:01.017) 0:00:10.855 ********* 2026-03-17 00:28:55.023481 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUCccdi/K27HMlJXgJ/rzHngv2dFLvelc/W7lRRDM1M) 2026-03-17 00:28:55.023495 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC14zy6WO3bAw9rzasJVlk3dHoTbP04rz5u5xt7Pi/lVu8JFp/EIPqRO5M9ce+IxJR6ihGzp7HYnxYdHZSGAnA8Dh9gNQu7ZoB7iKl2UADTmQhnMQ7fdezXc8G23mSVzj1P9IwtZq0DtdaBvmWsNrbgVm2j0xP3UzvJwZCUMLDjZSMwGQtTDMsMP4p3flvlC7bjRGtR3NLO8QAjfUTNvwTXmAGxkipX9kGCI7NRYFJSVZRAD890tM8Tklzlacv/7h5SKlAG2WaEF74SCg9O0tkLnTXv1uTmngyEvr4BV04fwiHlZRgpzIi5/6/NHRP1GENYAhTh7FlVTyQj3z02oyNi/vOlMR1+6Vc2fnCrsfNr2aFXdgmJ7WruLRJmG/XZTNUXZTfvDdsrw6YMbZbMXLtsK+ZZBYwXAIcgbb5ciYRPf0QKhvJFOjStD20Oqvwufq8DsoW7Yj7w2GCGknwRRdu6cLwXBLSP4M9bYEmtxBjOoNK1Y2E5KvnvpIQms5/bBU0=) 2026-03-17 00:28:55.023504 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG5Z1uGz4zJzFja7s8zmAcVi4kYB/cB2NdRUuIOxAitjGeJB3ckIrPx/m9qcyLwieAZthVYC3Hw+iMe4JUqhHa8=) 2026-03-17 00:28:55.023511 | orchestrator | 2026-03-17 00:28:55.023520 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:28:55.023528 | orchestrator | Tuesday 17 March 2026 00:28:54 +0000 (0:00:00.922) 0:00:11.777 ********* 2026-03-17 00:28:55.023542 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGOrXIRCstPwXOtEZOGhHaqylP6/fJ++VaZhRsdTbnTyvpl6s0a5gRwFNV3i+Mmg3Tkrw/RvlqLuVY+l+jqjGLw=) 2026-03-17 00:29:05.695175 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOCmKKJSYPzUc/YW6S147t2SRi5tN+fchFsTbkEXUHnV) 2026-03-17 00:29:05.695284 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC2L0wU7Gw0HncxfQkm8ZIM/Ozdjo+pgM2fZN6MTd9m/1OzzmBEfQvbrsH13lGqZPNeRNRvGz2Iy1to9eYyZu+wfljaUCvm9l/Js1eqDnyUowkY9POHNl6gIlAYICM+2A+C6mL4AcmSghCD+0r8pZ3w4fiahen6X8kxno7mSbzr4GolQD0atXi6fKr66f/3QB4fjlQm6mHaJPn6DNclfd9EpByir8Moh7Dao3dWqcSla0xGu3me8VT89DQ6y54Xd/b5ASMOMgna1DhcVzoULYGoOXYYH7PMyVwv1d8ZcFNucL7g3PqOX7y+Wh5IIUm03eQnO6phscLexCyTCy/c4wHyY0t/RYW/GX2MbaULI0Ek1QjkdwIXQz0sWyP9po5omc3Q5DueHVVhZ+Ly6mlDsA0dMP317vL21Def+RpUzo6L//VKN9kMcExS+SeznUupcYDeNePu6ru3WHWcO3LkKGewKTVrC/jRlAJrJEKpZcJcAohox1APsVbrNRiytUjuMZs=) 2026-03-17 00:29:05.695299 | orchestrator | 2026-03-17 00:29:05.695308 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:29:05.695317 | orchestrator | Tuesday 17 March 2026 00:28:55 +0000 (0:00:00.993) 0:00:12.771 ********* 2026-03-17 00:29:05.695327 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCY1tEP6KT+9PBRYm1blU6Mx6hBBb9o/aY6R6YL+Imo4K1SjzO7PSVXl+kioJjG3eDbehyJVYCqamvet0tj+tTJ7xhHatc6dUCOtdEjW03AnRZnAKhAs3nZyW/xFxR452DeiAR0By9B0oXb2VanD+JLqgeTaBUS4CbeX8T3QATE1knV+oHdnER6JWObb/aZ29Wd1oTmxK1rbqM39R+EAM6Wh5+Uovjoaf4MyHPsc3VBl3fQbDjY+0kf8nt8emGuBKZVM+tmU3wjTihuhlRo4HlOwc4Jw7LVciHos3xezRkPFyNnpnb7kRgPuSSaZEFfPHSECzHFgQTIM3J10sny/NFzfMcQB25UJZx+dNyD5w2hPPcRBd3gdfp0NraSorzLyFM9Junb2jbUxWWIsYuC5ta5Q+QUrulTKFurgC0UuSuzLMYgTMyX/vgP2XT7cJ1uKbyYIWmyp0YmVnn3uPSnr1QMO4ubTW/58Rif/IpCkFy5zABfxBJaAn5y8KQXex/k/6M=) 2026-03-17 00:29:05.695344 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPDRQgJkvM5b8LueCsiS1PzFtX31rFjWRP2ysSewOUCV3lXYUFvRnEkdSFOUSfWUvIEnRBcjXAaLS1t3cUY+Tj4=) 2026-03-17 00:29:05.695363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO2ldK6yF6D70abcq7vrM33741HuZwC1oAMQKTM46hq9) 2026-03-17 00:29:05.695389 | orchestrator | 2026-03-17 00:29:05.695411 | 
orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-17 00:29:05.695425 | orchestrator | Tuesday 17 March 2026 00:28:56 +0000 (0:00:01.052) 0:00:13.824 ********* 2026-03-17 00:29:05.695437 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-17 00:29:05.695450 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-17 00:29:05.695461 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-17 00:29:05.695501 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-17 00:29:05.695514 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-17 00:29:05.695526 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-17 00:29:05.695555 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-17 00:29:05.695568 | orchestrator | 2026-03-17 00:29:05.695581 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-17 00:29:05.695594 | orchestrator | Tuesday 17 March 2026 00:29:02 +0000 (0:00:05.286) 0:00:19.111 ********* 2026-03-17 00:29:05.695608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-17 00:29:05.695619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-17 00:29:05.695627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-17 00:29:05.695634 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-17 00:29:05.695641 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-17 00:29:05.695649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-17 00:29:05.695656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-17 00:29:05.695665 | orchestrator | 2026-03-17 00:29:05.695690 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:29:05.695699 | orchestrator | Tuesday 17 March 2026 00:29:02 +0000 (0:00:00.174) 0:00:19.285 ********* 2026-03-17 00:29:05.695708 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdc1m6uqcRphoWxb3xQyaHch3HPdEaMiHlrHoGC/gqB) 2026-03-17 00:29:05.695717 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdaXcGq9pnHcgSqPRW91TCeUnSHBSeXZUpQgc2Wj1OyDHVObpi2i0UacBxFvaoZMLoz0kOZr6UMZCN3z3KN4EoMdednnq/O+z7vYoPd7kC+ZsDh77qIfs1xJ3wp9EqkMh7Da0NaaqLRBUySI/pfxNxgaQRfE2zhBu1LnminbpW0HjJQFLiB9k6iW/C5ERutRqJ8fQK/IBwypXjfdst7wzIsn+IHanE43oYRLiz8lGGrdgV/JEnjMc2tAg7i46Jp+oxO1W4lQRm4+9R2xjL6yICkq8HQdAwMwJwseAgvRlYBRCMrbZO6hfZv3IBj4NPBJHr9dMKFD4UjP8bV7xZpD+vCFNVuyo2gfAtLWAx6OrCpcwgip6TaROeGEm1SadNr16CgbRcscDM4EmpjG4MDfXaccSl+duy6UINdfeJ81nARLJAfM/CGw6qDBEN89SN0HSH4mLPYnD2ctGbhw9NN5m/i2g5dYKe7iu++ZcQmgJ3YXcoWcY3jRlPRt4wacq4/c8=) 2026-03-17 00:29:05.695726 | orchestrator | 
changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBg2fDHs9lape21CUFinj79B34+nxfG+mTl89u52+KCkVrQ7Lm4Ywh8Ad97dGstevX0I9qxZKqpJGpwzMTsIJ5g=) 2026-03-17 00:29:05.695735 | orchestrator | 2026-03-17 00:29:05.695743 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:29:05.695752 | orchestrator | Tuesday 17 March 2026 00:29:03 +0000 (0:00:01.055) 0:00:20.340 ********* 2026-03-17 00:29:05.695760 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChsWZ308Fbjp1k1wHmv01kBCA+9ECnioWRAqj8m7gD5lJ0H5xwcCejjuThqbdjC/SqsgCz8yiqMHl2gCKnf9qx2IPK6VffyXxJ9w/dy2XufY+UyW7AwLPnI2ahX+5TLeOGl8FYVH8PpkDP5q2fCu+b9nHRa2Ze7AVrt3SjaYO64gmMcuWRp+cHXW0zEG4sTPPCAFrCeYBR+0zduX8galgPVkwhyokU+RLkB/CQ6TVc2DyPgdmKibE6/vjk86ybdcLZ+pnd6Vh05+erWXJUBzTocuLtTSVexgCHhIW3HGJOgD0pfgdI3q5EGH0PuYu/zvdLIvn91o8XQVEFm7+kkdP+sEk+AWakN7W0VDOORs2Al2bv/UNVybmAjUlRgMdQFRvRgah5BIVNaPhZmluYiBRLLI+4k2pqnTBhOk7MuoAMZ0AGKP7a0J8Zfab+4LD7aUYVdZeXk49FaJu8zKHvKZAa4uhkPfgEEd5SGxGfpy2M5JM/Wp5nIM4oBh2U6ALNQx8=) 2026-03-17 00:29:05.695777 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdOhDsuIlsrWrlfhhUrrRbIWFwQlwU0jxgSXM0yKp6/+QHNvmjdX9l7LflJbOpeTbSV9WFnX6OV9dpIUFhnpQs=) 2026-03-17 00:29:05.695785 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOrFCJBMoq8hKE/thPdNHCB4R2CPluwX6EX/jT7tlLGI) 2026-03-17 00:29:05.695794 | orchestrator | 2026-03-17 00:29:05.695803 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:29:05.695811 | orchestrator | Tuesday 17 March 2026 00:29:04 +0000 (0:00:01.054) 0:00:21.395 ********* 2026-03-17 00:29:05.695820 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIDz1qNCYAZZtGsYJsPFVKLjGV7oJyHKPLP9uWB7Twztc) 2026-03-17 00:29:05.695829 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjzgCu5ELXsfHOe3Pjfo71nCQbfGoDsDl3OTaXZrgYsSKlR5uK4thMduIscu3Rxfwyl0or9WbzVu7NsQUCdE+3KF9fpo+FyQPNSwcdJYI6FPmyovfWmyhhkXyGFJsmihU2UB9iEfroMgWfMOCixCiglskE11vrpIMRpZwiPTfgScOUnmYrc7o//WF6GF1ZA+bDKK5HSyJ7xgVIRHjPKUn8MCrfP1WcC9VK6EmtU2KZhY4oiJLyHJJr6EcE/jAlrQNIE2wW168zsvSvn1ULoizm5qAINjNQg9yLKBCUxwaYeiDnLX8xB5Uhq0srw1904GHpbZUM18KOseVg19CPpvXhx+b1oj70AgL3kP2slJH79/YaUXVsSmOKvBstmojznxTuCVzaxvVQjYeVANIm6z1oLHgvCepy4oZUoN6DKWpns4N5mlLPUzpAHAzIezk2STBh20vmGKAXXZ9j4dnI0xjbufkyX5hqbSAme9zKdq5TBnVMoNkOosPxB8d7EvY0c+c=) 2026-03-17 00:29:05.695838 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHJwo3ka/EHQuH5EMQy8rUqp5Lc9J3Ni7O7/n/Bc5WlRB5NqLBcyyE0EHnOSW/JGe7Psji2P6hdwcKbxuG/JJ1U=) 2026-03-17 00:29:05.695846 | orchestrator | 2026-03-17 00:29:05.695854 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:29:05.695863 | orchestrator | Tuesday 17 March 2026 00:29:05 +0000 (0:00:01.021) 0:00:22.417 ********* 2026-03-17 00:29:05.695879 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD3r16Kqp5bQpnwqTXfAqoEajf3dmaoI6jrsr8RADFh6Lw692vIyFy/k3Cj1LPI90A9AQyjaPxx3Iwu3JlzVyJy5PF7xs3hZtMIYdzJLatqrbKxGi/M5GzxPVawrljji5pCzlfuqsmf7WIMnMO3uwjn6G0RdHt2AMDKP7wE0hZF17P+3eI7SBh+66GcefklNaCTBGMWkm+DJnOVpEjxTwlDJLw6+gc2iAdoe9/EksquIExV1OmrjeRdDbPEoolhqjmmfrEJeLMLPxtyAdlZR/g9azxXJZMHzAoeTJCGSwVsupOKL/0nHHp5iesrtH5mWNo4IiopIVnod1E8aPfeElM8cfB91pcIGT6D49ekC7u8q0yk5VdEpa4V4uMRUJA9to0/XGzdpjb77Y91bPw21wThkwGBHPLxC7bxLWqOW4P3beM9BV1Ujna2bWH5FmQboTacjbl6Gk2fAF7Lmf2zQNUYF0KjvWMdK2HPw4ZsliswHkqXO7KLVewNBn3cOq5H+Ps=) 2026-03-17 00:29:10.395681 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIINBuzvUE+S9zxQ4fk7SA+DipNadvZh31Jy1nhBY5toI) 2026-03-17 00:29:10.395783 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAJ+Hi2c9vVGmydISc9iZvkiJdsf3OmosUXanvjA8dxBn2kFXTYeSeq93NsGxF7yd7Pw0VL0ZB/sRKNfP2ZDizc=) 2026-03-17 00:29:10.395801 | orchestrator | 2026-03-17 00:29:10.395814 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:29:10.395826 | orchestrator | Tuesday 17 March 2026 00:29:06 +0000 (0:00:01.026) 0:00:23.444 ********* 2026-03-17 00:29:10.395847 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC14zy6WO3bAw9rzasJVlk3dHoTbP04rz5u5xt7Pi/lVu8JFp/EIPqRO5M9ce+IxJR6ihGzp7HYnxYdHZSGAnA8Dh9gNQu7ZoB7iKl2UADTmQhnMQ7fdezXc8G23mSVzj1P9IwtZq0DtdaBvmWsNrbgVm2j0xP3UzvJwZCUMLDjZSMwGQtTDMsMP4p3flvlC7bjRGtR3NLO8QAjfUTNvwTXmAGxkipX9kGCI7NRYFJSVZRAD890tM8Tklzlacv/7h5SKlAG2WaEF74SCg9O0tkLnTXv1uTmngyEvr4BV04fwiHlZRgpzIi5/6/NHRP1GENYAhTh7FlVTyQj3z02oyNi/vOlMR1+6Vc2fnCrsfNr2aFXdgmJ7WruLRJmG/XZTNUXZTfvDdsrw6YMbZbMXLtsK+ZZBYwXAIcgbb5ciYRPf0QKhvJFOjStD20Oqvwufq8DsoW7Yj7w2GCGknwRRdu6cLwXBLSP4M9bYEmtxBjOoNK1Y2E5KvnvpIQms5/bBU0=) 2026-03-17 00:29:10.395898 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG5Z1uGz4zJzFja7s8zmAcVi4kYB/cB2NdRUuIOxAitjGeJB3ckIrPx/m9qcyLwieAZthVYC3Hw+iMe4JUqhHa8=) 2026-03-17 00:29:10.395911 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUCccdi/K27HMlJXgJ/rzHngv2dFLvelc/W7lRRDM1M) 2026-03-17 00:29:10.395922 | orchestrator | 2026-03-17 00:29:10.395936 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:29:10.395954 | orchestrator | Tuesday 17 March 2026 00:29:07 +0000 (0:00:01.036) 0:00:24.480 
********* 2026-03-17 00:29:10.395965 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGOrXIRCstPwXOtEZOGhHaqylP6/fJ++VaZhRsdTbnTyvpl6s0a5gRwFNV3i+Mmg3Tkrw/RvlqLuVY+l+jqjGLw=) 2026-03-17 00:29:10.395977 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2L0wU7Gw0HncxfQkm8ZIM/Ozdjo+pgM2fZN6MTd9m/1OzzmBEfQvbrsH13lGqZPNeRNRvGz2Iy1to9eYyZu+wfljaUCvm9l/Js1eqDnyUowkY9POHNl6gIlAYICM+2A+C6mL4AcmSghCD+0r8pZ3w4fiahen6X8kxno7mSbzr4GolQD0atXi6fKr66f/3QB4fjlQm6mHaJPn6DNclfd9EpByir8Moh7Dao3dWqcSla0xGu3me8VT89DQ6y54Xd/b5ASMOMgna1DhcVzoULYGoOXYYH7PMyVwv1d8ZcFNucL7g3PqOX7y+Wh5IIUm03eQnO6phscLexCyTCy/c4wHyY0t/RYW/GX2MbaULI0Ek1QjkdwIXQz0sWyP9po5omc3Q5DueHVVhZ+Ly6mlDsA0dMP317vL21Def+RpUzo6L//VKN9kMcExS+SeznUupcYDeNePu6ru3WHWcO3LkKGewKTVrC/jRlAJrJEKpZcJcAohox1APsVbrNRiytUjuMZs=) 2026-03-17 00:29:10.395988 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOCmKKJSYPzUc/YW6S147t2SRi5tN+fchFsTbkEXUHnV) 2026-03-17 00:29:10.395999 | orchestrator | 2026-03-17 00:29:10.396057 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-17 00:29:10.396069 | orchestrator | Tuesday 17 March 2026 00:29:08 +0000 (0:00:00.997) 0:00:25.478 ********* 2026-03-17 00:29:10.396080 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO2ldK6yF6D70abcq7vrM33741HuZwC1oAMQKTM46hq9) 2026-03-17 00:29:10.396092 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCY1tEP6KT+9PBRYm1blU6Mx6hBBb9o/aY6R6YL+Imo4K1SjzO7PSVXl+kioJjG3eDbehyJVYCqamvet0tj+tTJ7xhHatc6dUCOtdEjW03AnRZnAKhAs3nZyW/xFxR452DeiAR0By9B0oXb2VanD+JLqgeTaBUS4CbeX8T3QATE1knV+oHdnER6JWObb/aZ29Wd1oTmxK1rbqM39R+EAM6Wh5+Uovjoaf4MyHPsc3VBl3fQbDjY+0kf8nt8emGuBKZVM+tmU3wjTihuhlRo4HlOwc4Jw7LVciHos3xezRkPFyNnpnb7kRgPuSSaZEFfPHSECzHFgQTIM3J10sny/NFzfMcQB25UJZx+dNyD5w2hPPcRBd3gdfp0NraSorzLyFM9Junb2jbUxWWIsYuC5ta5Q+QUrulTKFurgC0UuSuzLMYgTMyX/vgP2XT7cJ1uKbyYIWmyp0YmVnn3uPSnr1QMO4ubTW/58Rif/IpCkFy5zABfxBJaAn5y8KQXex/k/6M=) 2026-03-17 00:29:10.396103 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPDRQgJkvM5b8LueCsiS1PzFtX31rFjWRP2ysSewOUCV3lXYUFvRnEkdSFOUSfWUvIEnRBcjXAaLS1t3cUY+Tj4=) 2026-03-17 00:29:10.396114 | orchestrator | 2026-03-17 00:29:10.396125 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-17 00:29:10.396136 | orchestrator | Tuesday 17 March 2026 00:29:09 +0000 (0:00:01.043) 0:00:26.522 ********* 2026-03-17 00:29:10.396148 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-17 00:29:10.396159 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-17 00:29:10.396187 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-17 00:29:10.396199 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-17 00:29:10.396212 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-17 00:29:10.396224 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-17 00:29:10.396237 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-17 00:29:10.396258 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:29:10.396270 | orchestrator | 2026-03-17 00:29:10.396283 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2026-03-17 00:29:10.396295 | orchestrator | Tuesday 17 March 2026 00:29:09 +0000 (0:00:00.191) 0:00:26.713 ********* 2026-03-17 00:29:10.396308 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:29:10.396321 | orchestrator | 2026-03-17 00:29:10.396334 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-17 00:29:10.396345 | orchestrator | Tuesday 17 March 2026 00:29:09 +0000 (0:00:00.049) 0:00:26.763 ********* 2026-03-17 00:29:10.396358 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:29:10.396370 | orchestrator | 2026-03-17 00:29:10.396383 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-17 00:29:10.396395 | orchestrator | Tuesday 17 March 2026 00:29:09 +0000 (0:00:00.049) 0:00:26.812 ********* 2026-03-17 00:29:10.396407 | orchestrator | changed: [testbed-manager] 2026-03-17 00:29:10.396420 | orchestrator | 2026-03-17 00:29:10.396433 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:29:10.396446 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-17 00:29:10.396459 | orchestrator | 2026-03-17 00:29:10.396472 | orchestrator | 2026-03-17 00:29:10.396483 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:29:10.396496 | orchestrator | Tuesday 17 March 2026 00:29:10 +0000 (0:00:00.493) 0:00:27.305 ********* 2026-03-17 00:29:10.396509 | orchestrator | =============================================================================== 2026-03-17 00:29:10.396521 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.43s 2026-03-17 00:29:10.396533 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.29s 2026-03-17 00:29:10.396547 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-17 00:29:10.396559 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-17 00:29:10.396570 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-17 00:29:10.396581 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-03-17 00:29:10.396592 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-17 00:29:10.396603 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-17 00:29:10.396614 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-17 00:29:10.396625 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-17 00:29:10.396636 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-17 00:29:10.396646 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-17 00:29:10.396658 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-17 00:29:10.396675 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-17 00:29:10.396687 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2026-03-17 00:29:10.396698 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2026-03-17 00:29:10.396709 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s 2026-03-17 00:29:10.396720 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-03-17 00:29:10.396731 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-03-17 00:29:10.396742 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-03-17 00:29:10.630376 | orchestrator | + osism apply squid 2026-03-17 00:29:21.986372 | orchestrator | 2026-03-17 00:29:21 | INFO  | Prepare task for execution of squid. 2026-03-17 00:29:22.070530 | orchestrator | 2026-03-17 00:29:22 | INFO  | Task cfbf3e43-cfb6-4240-be75-a2655e60aae7 (squid) was prepared for execution. 2026-03-17 00:29:22.070626 | orchestrator | 2026-03-17 00:29:22 | INFO  | It takes a moment until task cfbf3e43-cfb6-4240-be75-a2655e60aae7 (squid) has been started and output is visible here. 2026-03-17 00:31:31.610232 | orchestrator | 2026-03-17 00:31:31.610312 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-17 00:31:31.610319 | orchestrator | 2026-03-17 00:31:31.610324 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-17 00:31:31.610328 | orchestrator | Tuesday 17 March 2026 00:29:25 +0000 (0:00:00.189) 0:00:00.189 ********* 2026-03-17 00:31:31.610333 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:31:31.610338 | orchestrator | 2026-03-17 00:31:31.610342 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-17 00:31:31.610345 | orchestrator | Tuesday 17 March 2026 00:29:25 +0000 (0:00:00.088) 0:00:00.278 ********* 2026-03-17 00:31:31.610349 | orchestrator | ok: [testbed-manager] 2026-03-17 00:31:31.610354 | orchestrator | 2026-03-17 00:31:31.610358 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-17 00:31:31.610362 | orchestrator | Tuesday 17 March 2026 
00:29:28 +0000 (0:00:02.518) 0:00:02.796 ********* 2026-03-17 00:31:31.610366 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-17 00:31:31.610370 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-17 00:31:31.610374 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-17 00:31:31.610378 | orchestrator | 2026-03-17 00:31:31.610382 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-17 00:31:31.610385 | orchestrator | Tuesday 17 March 2026 00:29:29 +0000 (0:00:01.298) 0:00:04.095 ********* 2026-03-17 00:31:31.610389 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-17 00:31:31.610394 | orchestrator | 2026-03-17 00:31:31.610397 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-17 00:31:31.610401 | orchestrator | Tuesday 17 March 2026 00:29:30 +0000 (0:00:01.085) 0:00:05.181 ********* 2026-03-17 00:31:31.610405 | orchestrator | ok: [testbed-manager] 2026-03-17 00:31:31.610409 | orchestrator | 2026-03-17 00:31:31.610429 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-17 00:31:31.610436 | orchestrator | Tuesday 17 March 2026 00:29:31 +0000 (0:00:00.342) 0:00:05.523 ********* 2026-03-17 00:31:31.610441 | orchestrator | changed: [testbed-manager] 2026-03-17 00:31:31.610447 | orchestrator | 2026-03-17 00:31:31.610452 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-17 00:31:31.610457 | orchestrator | Tuesday 17 March 2026 00:29:31 +0000 (0:00:00.908) 0:00:06.431 ********* 2026-03-17 00:31:31.610462 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-17 00:31:31.610468 | orchestrator | ok: [testbed-manager] 2026-03-17 00:31:31.610474 | orchestrator | 2026-03-17 00:31:31.610484 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-17 00:31:31.610491 | orchestrator | Tuesday 17 March 2026 00:30:14 +0000 (0:00:42.730) 0:00:49.161 ********* 2026-03-17 00:31:31.610497 | orchestrator | changed: [testbed-manager] 2026-03-17 00:31:31.610503 | orchestrator | 2026-03-17 00:31:31.610509 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-17 00:31:31.610515 | orchestrator | Tuesday 17 March 2026 00:30:30 +0000 (0:00:15.971) 0:01:05.133 ********* 2026-03-17 00:31:31.610521 | orchestrator | Pausing for 60 seconds 2026-03-17 00:31:31.610526 | orchestrator | changed: [testbed-manager] 2026-03-17 00:31:31.610532 | orchestrator | 2026-03-17 00:31:31.610538 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-17 00:31:31.610564 | orchestrator | Tuesday 17 March 2026 00:31:30 +0000 (0:01:00.081) 0:02:05.214 ********* 2026-03-17 00:31:31.610570 | orchestrator | ok: [testbed-manager] 2026-03-17 00:31:31.610576 | orchestrator | 2026-03-17 00:31:31.610584 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-17 00:31:31.610588 | orchestrator | Tuesday 17 March 2026 00:31:30 +0000 (0:00:00.067) 0:02:05.282 ********* 2026-03-17 00:31:31.610591 | orchestrator | changed: [testbed-manager] 2026-03-17 00:31:31.610595 | orchestrator | 2026-03-17 00:31:31.610599 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:31:31.610603 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:31:31.610607 | orchestrator | 2026-03-17 00:31:31.610611 | orchestrator | 2026-03-17 00:31:31.610615 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-17 00:31:31.610618 | orchestrator | Tuesday 17 March 2026 00:31:31 +0000 (0:00:00.581) 0:02:05.864 ********* 2026-03-17 00:31:31.610622 | orchestrator | =============================================================================== 2026-03-17 00:31:31.610626 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-03-17 00:31:31.610629 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 42.73s 2026-03-17 00:31:31.610633 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.97s 2026-03-17 00:31:31.610637 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.52s 2026-03-17 00:31:31.610641 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.30s 2026-03-17 00:31:31.610644 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.09s 2026-03-17 00:31:31.610648 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.91s 2026-03-17 00:31:31.610652 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s 2026-03-17 00:31:31.610656 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s 2026-03-17 00:31:31.610659 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-03-17 00:31:31.610663 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-17 00:31:31.766590 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-17 00:31:31.766682 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-17 00:31:31.770426 | orchestrator | + set -e 2026-03-17 00:31:31.770486 | orchestrator | + NAMESPACE=kolla 2026-03-17 
00:31:31.770499 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-17 00:31:31.776281 | orchestrator | ++ semver latest 9.0.0 2026-03-17 00:31:31.825463 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-17 00:31:31.825551 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-17 00:31:31.826190 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-17 00:31:43.061659 | orchestrator | 2026-03-17 00:31:43 | INFO  | Prepare task for execution of operator. 2026-03-17 00:31:43.130599 | orchestrator | 2026-03-17 00:31:43 | INFO  | Task 8e2a4de3-7072-4721-8a25-9879984deb10 (operator) was prepared for execution. 2026-03-17 00:31:43.130684 | orchestrator | 2026-03-17 00:31:43 | INFO  | It takes a moment until task 8e2a4de3-7072-4721-8a25-9879984deb10 (operator) has been started and output is visible here. 2026-03-17 00:31:58.321579 | orchestrator | 2026-03-17 00:31:58.321676 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-17 00:31:58.321688 | orchestrator | 2026-03-17 00:31:58.321697 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 00:31:58.321706 | orchestrator | Tuesday 17 March 2026 00:31:46 +0000 (0:00:00.178) 0:00:00.178 ********* 2026-03-17 00:31:58.321715 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:31:58.321725 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:31:58.321733 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:31:58.321816 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:31:58.321826 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:31:58.321837 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:31:58.321845 | orchestrator | 2026-03-17 00:31:58.321852 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-17 00:31:58.321859 | orchestrator | Tuesday 17 March 2026 00:31:49 
+0000 (0:00:03.379) 0:00:03.558 *********
2026-03-17 00:31:58.321866 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:31:58.321873 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:31:58.321879 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:31:58.321886 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:31:58.321892 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:31:58.321899 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:31:58.321906 | orchestrator |
2026-03-17 00:31:58.321925 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-17 00:31:58.322153 | orchestrator |
2026-03-17 00:31:58.322170 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-17 00:31:58.322180 | orchestrator | Tuesday 17 March 2026 00:31:50 +0000 (0:00:00.836) 0:00:04.394 *********
2026-03-17 00:31:58.322188 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:31:58.322197 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:31:58.322205 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:31:58.322213 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:31:58.322229 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:31:58.322237 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:31:58.322244 | orchestrator |
2026-03-17 00:31:58.322252 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-17 00:31:58.322298 | orchestrator | Tuesday 17 March 2026 00:31:50 +0000 (0:00:00.160) 0:00:04.554 *********
2026-03-17 00:31:58.322306 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:31:58.322314 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:31:58.322320 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:31:58.322326 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:31:58.322352 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:31:58.322360 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:31:58.322367 | orchestrator |
2026-03-17 00:31:58.322374 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-17 00:31:58.322382 | orchestrator | Tuesday 17 March 2026 00:31:50 +0000 (0:00:00.141) 0:00:04.696 *********
2026-03-17 00:31:58.322389 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:31:58.322397 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:31:58.322404 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:31:58.322413 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:31:58.322420 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:31:58.322427 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:31:58.322434 | orchestrator |
2026-03-17 00:31:58.322441 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-17 00:31:58.322448 | orchestrator | Tuesday 17 March 2026 00:31:51 +0000 (0:00:00.665) 0:00:05.361 *********
2026-03-17 00:31:58.322454 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:31:58.322462 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:31:58.322469 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:31:58.322476 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:31:58.322482 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:31:58.322489 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:31:58.322496 | orchestrator |
2026-03-17 00:31:58.322502 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-17 00:31:58.322509 | orchestrator | Tuesday 17 March 2026 00:31:52 +0000 (0:00:01.290) 0:00:06.223 *********
2026-03-17 00:31:58.322516 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-17 00:31:58.322523 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-17 00:31:58.322530 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-17 00:31:58.322537 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-17 00:31:58.322544 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-17 00:31:58.322565 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-17 00:31:58.322573 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-17 00:31:58.322580 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-17 00:31:58.322588 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-17 00:31:58.322595 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-17 00:31:58.322603 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-17 00:31:58.322609 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-17 00:31:58.322617 | orchestrator |
2026-03-17 00:31:58.322624 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-17 00:31:58.322631 | orchestrator | Tuesday 17 March 2026 00:31:53 +0000 (0:00:01.385) 0:00:07.513 *********
2026-03-17 00:31:58.322638 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:31:58.322645 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:31:58.322652 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:31:58.322659 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:31:58.322666 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:31:58.322673 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:31:58.322680 | orchestrator |
2026-03-17 00:31:58.322687 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-17 00:31:58.322696 | orchestrator | Tuesday 17 March 2026 00:31:54 +0000 (0:00:01.438) 0:00:08.899 *********
2026-03-17 00:31:58.322702 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-17 00:31:58.322710 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-17 00:31:58.322718 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-17 00:31:58.322725 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-17 00:31:58.322732 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-17 00:31:58.322761 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-17 00:31:58.322769 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-17 00:31:58.322777 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-17 00:31:58.322784 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-17 00:31:58.322791 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-17 00:31:58.322798 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-17 00:31:58.322805 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-17 00:31:58.322812 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-17 00:31:58.322819 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-17 00:31:58.322826 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-17 00:31:58.322840 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-17 00:31:58.322847 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-17 00:31:58.322854 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-17 00:31:58.322860 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-17 00:31:58.322867 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-17 00:31:58.322875 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-17 00:31:58.322882 | orchestrator |
2026-03-17 00:31:58.322889 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-17 00:31:58.322898 | orchestrator | Tuesday 17 March 2026 00:31:56 +0000 (0:00:01.438) 0:00:10.338 *********
2026-03-17 00:31:58.322904 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:31:58.322911 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:31:58.322917 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:31:58.322985 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:31:58.322995 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:31:58.323002 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:31:58.323008 | orchestrator |
2026-03-17 00:31:58.323015 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-17 00:31:58.323022 | orchestrator | Tuesday 17 March 2026 00:31:56 +0000 (0:00:00.117) 0:00:10.455 *********
2026-03-17 00:31:58.323029 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:31:58.323035 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:31:58.323041 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:31:58.323048 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:31:58.323054 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:31:58.323060 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:31:58.323067 | orchestrator |
2026-03-17 00:31:58.323073 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-17 00:31:58.323079 | orchestrator | Tuesday 17 March 2026 00:31:56 +0000 (0:00:00.129) 0:00:10.584 *********
2026-03-17 00:31:58.323085 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:31:58.323092 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:31:58.323099 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:31:58.323105 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:31:58.323112 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:31:58.323119 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:31:58.323126 | orchestrator |
2026-03-17 00:31:58.323131 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-17 00:31:58.323135 | orchestrator | Tuesday 17 March 2026 00:31:57 +0000 (0:00:00.610) 0:00:11.195 *********
2026-03-17 00:31:58.323140 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:31:58.323144 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:31:58.323148 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:31:58.323153 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:31:58.323157 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:31:58.323161 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:31:58.323165 | orchestrator |
2026-03-17 00:31:58.323170 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-17 00:31:58.323174 | orchestrator | Tuesday 17 March 2026 00:31:57 +0000 (0:00:00.161) 0:00:11.356 *********
2026-03-17 00:31:58.323178 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-17 00:31:58.323183 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:31:58.323187 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-17 00:31:58.323191 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-17 00:31:58.323196 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:31:58.323200 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 00:31:58.323204 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:31:58.323208 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:31:58.323213 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-17 00:31:58.323217 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-17 00:31:58.323221 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:31:58.323225 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:31:58.323230 | orchestrator |
2026-03-17 00:31:58.323234 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-17 00:31:58.323238 | orchestrator | Tuesday 17 March 2026 00:31:58 +0000 (0:00:00.737) 0:00:12.094 *********
2026-03-17 00:31:58.323242 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:31:58.323247 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:31:58.323251 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:31:58.323255 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:31:58.323259 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:31:58.323263 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:31:58.323268 | orchestrator |
2026-03-17 00:31:58.323272 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-17 00:31:58.323276 | orchestrator | Tuesday 17 March 2026 00:31:58 +0000 (0:00:00.123) 0:00:12.217 *********
2026-03-17 00:31:58.323290 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:31:58.323294 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:31:58.323298 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:31:58.323303 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:31:58.323314 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:31:59.448544 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:31:59.448633 | orchestrator |
2026-03-17 00:31:59.448646 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-17 00:31:59.448656 | orchestrator | Tuesday 17 March 2026 00:31:58 +0000 (0:00:00.119) 0:00:12.337 *********
2026-03-17 00:31:59.448665 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:31:59.448674 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:31:59.448683 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:31:59.448692 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:31:59.448701 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:31:59.448709 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:31:59.448718 | orchestrator |
2026-03-17 00:31:59.448727 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-17 00:31:59.448736 | orchestrator | Tuesday 17 March 2026 00:31:58 +0000 (0:00:00.109) 0:00:12.446 *********
2026-03-17 00:31:59.448744 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:31:59.448753 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:31:59.448762 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:31:59.448771 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:31:59.448779 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:31:59.448788 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:31:59.448796 | orchestrator |
2026-03-17 00:31:59.448805 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-17 00:31:59.448814 | orchestrator | Tuesday 17 March 2026 00:31:59 +0000 (0:00:00.702) 0:00:13.149 *********
2026-03-17 00:31:59.448822 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:31:59.448831 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:31:59.448840 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:31:59.448848 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:31:59.448857 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:31:59.448865 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:31:59.448874 | orchestrator |
2026-03-17 00:31:59.448887 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:31:59.448903 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 00:31:59.449038 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 00:31:59.449061 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 00:31:59.449078 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 00:31:59.449094 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 00:31:59.449110 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 00:31:59.449126 | orchestrator |
2026-03-17 00:31:59.449142 | orchestrator |
2026-03-17 00:31:59.449157 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:31:59.449169 | orchestrator | Tuesday 17 March 2026 00:31:59 +0000 (0:00:00.176) 0:00:13.326 *********
2026-03-17 00:31:59.449185 | orchestrator | ===============================================================================
2026-03-17 00:31:59.449229 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s
2026-03-17 00:31:59.449250 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.44s
2026-03-17 00:31:59.449267 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.39s
2026-03-17 00:31:59.449283 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.29s
2026-03-17 00:31:59.449299 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s
2026-03-17 00:31:59.449314 | orchestrator | Do not require tty for all users ---------------------------------------- 0.84s
2026-03-17 00:31:59.449329 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s
2026-03-17 00:31:59.449343 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s
2026-03-17 00:31:59.449358 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s
2026-03-17 00:31:59.449373 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2026-03-17 00:31:59.449387 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.18s
2026-03-17 00:31:59.449402 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-03-17 00:31:59.449417 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2026-03-17 00:31:59.449432 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2026-03-17 00:31:59.449447 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.13s
2026-03-17 00:31:59.449462 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.12s
2026-03-17 00:31:59.449477 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.12s
2026-03-17 00:31:59.449492 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.12s
2026-03-17 00:31:59.449504 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.11s
2026-03-17 00:31:59.569719 | orchestrator | + osism apply --environment custom facts
2026-03-17 00:32:00.693783 | orchestrator | 2026-03-17 00:32:00 | INFO  | Trying to run play facts in environment custom
2026-03-17 00:32:10.787083 | orchestrator | 2026-03-17 00:32:10 | INFO  | Prepare task for execution of facts.
2026-03-17 00:32:10.856158 | orchestrator | 2026-03-17 00:32:10 | INFO  | Task 0920ba60-105a-406a-a3e4-aea68394d7f1 (facts) was prepared for execution.
2026-03-17 00:32:10.856271 | orchestrator | 2026-03-17 00:32:10 | INFO  | It takes a moment until task 0920ba60-105a-406a-a3e4-aea68394d7f1 (facts) has been started and output is visible here.
2026-03-17 00:32:55.441853 | orchestrator |
2026-03-17 00:32:55.441999 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-17 00:32:55.442093 | orchestrator |
2026-03-17 00:32:55.442115 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-17 00:32:55.442152 | orchestrator | Tuesday 17 March 2026 00:32:13 +0000 (0:00:00.105) 0:00:00.105 *********
2026-03-17 00:32:55.442171 | orchestrator | ok: [testbed-manager]
2026-03-17 00:32:55.442190 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:55.442209 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:55.442229 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:32:55.442248 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:55.442267 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:32:55.442286 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:32:55.442304 | orchestrator |
2026-03-17 00:32:55.442324 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-17 00:32:55.442343 | orchestrator | Tuesday 17 March 2026 00:32:15 +0000 (0:00:01.358) 0:00:01.464 *********
2026-03-17 00:32:55.442362 | orchestrator | ok: [testbed-manager]
2026-03-17 00:32:55.442383 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:55.442403 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:32:55.442450 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:32:55.442470 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:32:55.442490 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:55.442509 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:55.442529 | orchestrator |
2026-03-17 00:32:55.442550 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-17 00:32:55.442569 | orchestrator |
2026-03-17 00:32:55.442587 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-17 00:32:55.442606 | orchestrator | Tuesday 17 March 2026 00:32:16 +0000 (0:00:00.096) 0:00:02.602 *********
2026-03-17 00:32:55.442626 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:55.442645 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:55.442665 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:55.442684 | orchestrator |
2026-03-17 00:32:55.442700 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-17 00:32:55.442719 | orchestrator | Tuesday 17 March 2026 00:32:16 +0000 (0:00:00.182) 0:00:02.699 *********
2026-03-17 00:32:55.442735 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:55.442750 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:55.442767 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:55.442782 | orchestrator |
2026-03-17 00:32:55.442799 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-17 00:32:55.442818 | orchestrator | Tuesday 17 March 2026 00:32:16 +0000 (0:00:00.202) 0:00:02.881 *********
2026-03-17 00:32:55.442837 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:55.442856 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:55.442875 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:55.443016 | orchestrator |
2026-03-17 00:32:55.443038 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-17 00:32:55.443057 | orchestrator | Tuesday 17 March 2026 00:32:16 +0000 (0:00:00.129) 0:00:03.083 *********
2026-03-17 00:32:55.443075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:32:55.443093 | orchestrator |
2026-03-17 00:32:55.443108 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-17 00:32:55.443123 | orchestrator | Tuesday 17 March 2026 00:32:16 +0000 (0:00:00.426) 0:00:03.213 *********
2026-03-17 00:32:55.443139 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:55.443153 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:55.443170 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:55.443187 | orchestrator |
2026-03-17 00:32:55.443204 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-17 00:32:55.443220 | orchestrator | Tuesday 17 March 2026 00:32:17 +0000 (0:00:00.104) 0:00:03.640 *********
2026-03-17 00:32:55.443236 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:32:55.443252 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:32:55.443269 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:32:55.443286 | orchestrator |
2026-03-17 00:32:55.443303 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-17 00:32:55.443321 | orchestrator | Tuesday 17 March 2026 00:32:17 +0000 (0:00:00.104) 0:00:03.745 *********
2026-03-17 00:32:55.443336 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:55.443352 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:55.443369 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:55.443384 | orchestrator |
2026-03-17 00:32:55.443399 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-17 00:32:55.443416 | orchestrator | Tuesday 17 March 2026 00:32:18 +0000 (0:00:01.087) 0:00:04.832 *********
2026-03-17 00:32:55.443434 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:55.443450 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:55.443466 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:55.443482 | orchestrator |
2026-03-17 00:32:55.443500 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-17 00:32:55.443518 | orchestrator | Tuesday 17 March 2026 00:32:18 +0000 (0:00:00.491) 0:00:05.324 *********
2026-03-17 00:32:55.443552 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:55.443568 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:55.443585 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:55.443603 | orchestrator |
2026-03-17 00:32:55.443621 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-17 00:32:55.443639 | orchestrator | Tuesday 17 March 2026 00:32:19 +0000 (0:00:01.059) 0:00:06.383 *********
2026-03-17 00:32:55.443657 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:55.443675 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:55.443691 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:55.443707 | orchestrator |
2026-03-17 00:32:55.443722 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-17 00:32:55.443737 | orchestrator | Tuesday 17 March 2026 00:32:36 +0000 (0:00:16.930) 0:00:23.314 *********
2026-03-17 00:32:55.443752 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:32:55.443768 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:32:55.443782 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:32:55.443795 | orchestrator |
2026-03-17 00:32:55.443835 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-17 00:32:55.443880 | orchestrator | Tuesday 17 March 2026 00:32:37 +0000 (0:00:00.119) 0:00:23.433 *********
2026-03-17 00:32:55.443921 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:32:55.443938 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:32:55.443954 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:32:55.443969 | orchestrator |
2026-03-17 00:32:55.443984 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-17 00:32:55.443999 | orchestrator | Tuesday 17 March 2026 00:32:45 +0000 (0:00:08.826) 0:00:32.260 *********
2026-03-17 00:32:55.444015 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:55.444030 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:55.444045 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:55.444058 | orchestrator |
2026-03-17 00:32:55.444074 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-17 00:32:55.444088 | orchestrator | Tuesday 17 March 2026 00:32:46 +0000 (0:00:00.504) 0:00:32.765 *********
2026-03-17 00:32:55.444103 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-17 00:32:55.444119 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-17 00:32:55.444133 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-17 00:32:55.444148 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-17 00:32:55.444163 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-17 00:32:55.444178 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-17 00:32:55.444194 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-17 00:32:55.444210 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-17 00:32:55.444226 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-17 00:32:55.444242 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-17 00:32:55.444259 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-17 00:32:55.444275 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-17 00:32:55.444291 | orchestrator |
2026-03-17 00:32:55.444307 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-17 00:32:55.444323 | orchestrator | Tuesday 17 March 2026 00:32:50 +0000 (0:00:03.934) 0:00:36.699 *********
2026-03-17 00:32:55.444339 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:55.444356 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:55.444372 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:55.444389 | orchestrator |
2026-03-17 00:32:55.444406 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-17 00:32:55.444440 | orchestrator |
2026-03-17 00:32:55.444457 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:32:55.444476 | orchestrator | Tuesday 17 March 2026 00:32:51 +0000 (0:00:01.449) 0:00:38.149 *********
2026-03-17 00:32:55.444493 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:32:55.444509 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:32:55.444523 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:32:55.444534 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:32:55.444545 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:32:55.444557 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:32:55.444638 | orchestrator | ok: [testbed-manager]
2026-03-17 00:32:55.444655 | orchestrator |
2026-03-17 00:32:55.444670 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:32:55.444685 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:32:55.444701 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:32:55.444717 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:32:55.444732 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:32:55.444745 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:32:55.444758 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:32:55.444770 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:32:55.444783 | orchestrator |
2026-03-17 00:32:55.444795 | orchestrator |
2026-03-17 00:32:55.444807 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:32:55.444820 | orchestrator | Tuesday 17 March 2026 00:32:55 +0000 (0:00:03.694) 0:00:41.844 *********
2026-03-17 00:32:55.444832 | orchestrator | ===============================================================================
2026-03-17 00:32:55.444843 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.93s
2026-03-17 00:32:55.444856 | orchestrator | Install required packages (Debian) -------------------------------------- 8.83s
2026-03-17 00:32:55.444868 | orchestrator | Copy fact files --------------------------------------------------------- 3.93s
2026-03-17 00:32:55.444881 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.69s
2026-03-17 00:32:55.444917 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.45s
2026-03-17 00:32:55.444933 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s
2026-03-17 00:32:55.444962 | orchestrator | Copy fact file ---------------------------------------------------------- 1.14s
2026-03-17 00:32:55.616556 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.09s
2026-03-17 00:32:55.616663 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2026-03-17 00:32:55.616682 | orchestrator | Create custom facts directory ------------------------------------------- 0.50s
2026-03-17 00:32:55.616689 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s
2026-03-17 00:32:55.616696 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-03-17 00:32:55.616703 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-03-17 00:32:55.616709 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2026-03-17 00:32:55.616716 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2026-03-17 00:32:55.616740 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2026-03-17 00:32:55.616747 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2026-03-17 00:32:55.616754 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-03-17 00:32:55.780849 | orchestrator | + osism apply bootstrap
2026-03-17 00:33:07.262270 | orchestrator | 2026-03-17 00:33:07 | INFO  | Prepare task for execution of bootstrap.
2026-03-17 00:33:07.335006 | orchestrator | 2026-03-17 00:33:07 | INFO  | Task 2a7c1c87-21c3-492e-8472-7ec6ef908bc0 (bootstrap) was prepared for execution.
2026-03-17 00:33:07.335100 | orchestrator | 2026-03-17 00:33:07 | INFO  | It takes a moment until task 2a7c1c87-21c3-492e-8472-7ec6ef908bc0 (bootstrap) has been started and output is visible here.
2026-03-17 00:33:23.225431 | orchestrator |
2026-03-17 00:33:23.225539 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-17 00:33:23.225553 | orchestrator |
2026-03-17 00:33:23.225563 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-17 00:33:23.225572 | orchestrator | Tuesday 17 March 2026 00:33:10 +0000 (0:00:00.191) 0:00:00.192 *********
2026-03-17 00:33:23.225581 | orchestrator | ok: [testbed-manager]
2026-03-17 00:33:23.225591 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:33:23.225599 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:33:23.225608 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:33:23.225618 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:33:23.225626 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:33:23.225635 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:33:23.225644 | orchestrator |
2026-03-17 00:33:23.225652 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-17 00:33:23.225661 | orchestrator |
2026-03-17 00:33:23.225671 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:33:23.225680 | orchestrator | Tuesday 17 March 2026 00:33:10 +0000 (0:00:00.296) 0:00:00.488 *********
2026-03-17 00:33:23.225689 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:33:23.225697 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:33:23.225706 | orchestrator | ok: [testbed-manager]
2026-03-17 00:33:23.225715 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:33:23.225723 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:33:23.225732 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:33:23.225740 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:33:23.225749 | orchestrator |
2026-03-17 00:33:23.225758 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-17 00:33:23.225766 | orchestrator |
2026-03-17 00:33:23.225775 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-17 00:33:23.225784 | orchestrator | Tuesday 17 March 2026 00:33:15 +0000 (0:00:04.743) 0:00:05.231 *********
2026-03-17 00:33:23.225794 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-17 00:33:23.225803 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-17 00:33:23.225812 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-17 00:33:23.225820 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-17 00:33:23.225829 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 00:33:23.225837 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-17 00:33:23.225846 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-17 00:33:23.225854 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-17 00:33:23.225943 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-17 00:33:23.225954 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-17 00:33:23.225962 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-17 00:33:23.225972 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-17 00:33:23.226006 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-17 00:33:23.226065 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-17 00:33:23.226077 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-17 00:33:23.226087 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-17 00:33:23.226096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-17 00:33:23.226106 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-17 00:33:23.226116 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-17 00:33:23.226126 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-17 00:33:23.226136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-17 00:33:23.226146 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-17 00:33:23.226156 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:33:23.226166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 00:33:23.226176 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:33:23.226186 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-17 00:33:23.226207 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-17 00:33:23.226217 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-17 00:33:23.226225 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-17 00:33:23.226234 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-17 00:33:23.226243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 00:33:23.226251 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:33:23.226260 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-17 00:33:23.226269 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-17 00:33:23.226278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-17
00:33:23.226286 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-17 00:33:23.226295 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-17 00:33:23.226304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 00:33:23.226313 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-17 00:33:23.226322 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-17 00:33:23.226330 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:33:23.226339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 00:33:23.226348 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-17 00:33:23.226357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-17 00:33:23.226365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 00:33:23.226374 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:33:23.226383 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-17 00:33:23.226408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-17 00:33:23.226418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-17 00:33:23.226427 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-17 00:33:23.226435 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-17 00:33:23.226444 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:33:23.226453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-17 00:33:23.226462 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-17 00:33:23.226470 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-17 00:33:23.226479 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:33:23.226488 | orchestrator | 2026-03-17 00:33:23.226496 | 
orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-17 00:33:23.226505 | orchestrator | 2026-03-17 00:33:23.226514 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-17 00:33:23.226576 | orchestrator | Tuesday 17 March 2026 00:33:16 +0000 (0:00:00.479) 0:00:05.711 ********* 2026-03-17 00:33:23.226586 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:23.226595 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:23.226603 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:23.226612 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:23.226621 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:23.226629 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:23.226638 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:23.226647 | orchestrator | 2026-03-17 00:33:23.226656 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-17 00:33:23.226665 | orchestrator | Tuesday 17 March 2026 00:33:17 +0000 (0:00:01.314) 0:00:07.025 ********* 2026-03-17 00:33:23.226673 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:23.226682 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:23.226691 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:23.226700 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:23.226709 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:23.226717 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:23.226726 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:23.226734 | orchestrator | 2026-03-17 00:33:23.226743 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-17 00:33:23.226752 | orchestrator | Tuesday 17 March 2026 00:33:18 +0000 (0:00:01.364) 0:00:08.390 ********* 2026-03-17 00:33:23.226762 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:33:23.226773 | orchestrator | 2026-03-17 00:33:23.226782 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-17 00:33:23.226791 | orchestrator | Tuesday 17 March 2026 00:33:19 +0000 (0:00:00.290) 0:00:08.680 ********* 2026-03-17 00:33:23.226800 | orchestrator | changed: [testbed-manager] 2026-03-17 00:33:23.226809 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:33:23.226817 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:33:23.226826 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:33:23.226835 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:33:23.226844 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:33:23.226852 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:33:23.226861 | orchestrator | 2026-03-17 00:33:23.226888 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-17 00:33:23.226897 | orchestrator | Tuesday 17 March 2026 00:33:20 +0000 (0:00:01.581) 0:00:10.262 ********* 2026-03-17 00:33:23.226906 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:33:23.226916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:33:23.226927 | orchestrator | 2026-03-17 00:33:23.226935 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-17 00:33:23.226944 | orchestrator | Tuesday 17 March 2026 00:33:20 +0000 (0:00:00.268) 0:00:10.530 ********* 2026-03-17 00:33:23.226953 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:33:23.226961 | 
orchestrator | changed: [testbed-node-3] 2026-03-17 00:33:23.226970 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:33:23.226979 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:33:23.226988 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:33:23.226996 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:33:23.227005 | orchestrator | 2026-03-17 00:33:23.227021 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-17 00:33:23.227030 | orchestrator | Tuesday 17 March 2026 00:33:22 +0000 (0:00:01.143) 0:00:11.674 ********* 2026-03-17 00:33:23.227039 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:33:23.227048 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:33:23.227063 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:33:23.227072 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:33:23.227080 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:33:23.227089 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:33:23.227098 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:33:23.227106 | orchestrator | 2026-03-17 00:33:23.227115 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-17 00:33:23.227124 | orchestrator | Tuesday 17 March 2026 00:33:22 +0000 (0:00:00.586) 0:00:12.260 ********* 2026-03-17 00:33:23.227132 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:33:23.227141 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:33:23.227150 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:33:23.227159 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:33:23.227167 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:33:23.227176 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:33:23.227185 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:23.227193 | orchestrator | 2026-03-17 00:33:23.227202 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-17 00:33:23.227212 | orchestrator | Tuesday 17 March 2026 00:33:23 +0000 (0:00:00.428) 0:00:12.689 ********* 2026-03-17 00:33:23.227221 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:33:23.227229 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:33:23.227244 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:33:35.092497 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:33:35.092590 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:33:35.092600 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:33:35.092607 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:33:35.092615 | orchestrator | 2026-03-17 00:33:35.092623 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-17 00:33:35.092632 | orchestrator | Tuesday 17 March 2026 00:33:23 +0000 (0:00:00.200) 0:00:12.890 ********* 2026-03-17 00:33:35.092640 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:33:35.092660 | orchestrator | 2026-03-17 00:33:35.092667 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-17 00:33:35.092675 | orchestrator | Tuesday 17 March 2026 00:33:23 +0000 (0:00:00.289) 0:00:13.179 ********* 2026-03-17 00:33:35.092682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:33:35.092688 | orchestrator | 2026-03-17 00:33:35.092695 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-17 
00:33:35.092702 | orchestrator | Tuesday 17 March 2026 00:33:23 +0000 (0:00:00.296) 0:00:13.475 ********* 2026-03-17 00:33:35.092709 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.092717 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:35.092724 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:35.092731 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:35.092737 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:35.092744 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:35.092750 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.092757 | orchestrator | 2026-03-17 00:33:35.092764 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-17 00:33:35.092771 | orchestrator | Tuesday 17 March 2026 00:33:25 +0000 (0:00:01.336) 0:00:14.812 ********* 2026-03-17 00:33:35.092778 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:33:35.092784 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:33:35.092791 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:33:35.092798 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:33:35.092804 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:33:35.092832 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:33:35.092840 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:33:35.092846 | orchestrator | 2026-03-17 00:33:35.092896 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-17 00:33:35.092903 | orchestrator | Tuesday 17 March 2026 00:33:25 +0000 (0:00:00.208) 0:00:15.021 ********* 2026-03-17 00:33:35.092910 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.092916 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:35.092923 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:35.092930 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:35.092937 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:35.092944 | orchestrator 
| ok: [testbed-node-5] 2026-03-17 00:33:35.092951 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.092957 | orchestrator | 2026-03-17 00:33:35.092964 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-17 00:33:35.092971 | orchestrator | Tuesday 17 March 2026 00:33:26 +0000 (0:00:00.574) 0:00:15.595 ********* 2026-03-17 00:33:35.092978 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:33:35.092984 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:33:35.092991 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:33:35.092998 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:33:35.093005 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:33:35.093012 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:33:35.093018 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:33:35.093025 | orchestrator | 2026-03-17 00:33:35.093033 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-17 00:33:35.093048 | orchestrator | Tuesday 17 March 2026 00:33:26 +0000 (0:00:00.262) 0:00:15.857 ********* 2026-03-17 00:33:35.093073 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.093087 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:33:35.093101 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:33:35.093113 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:33:35.093126 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:33:35.093139 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:33:35.093148 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:33:35.093154 | orchestrator | 2026-03-17 00:33:35.093160 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-17 00:33:35.093166 | orchestrator | Tuesday 17 March 2026 00:33:26 +0000 (0:00:00.594) 0:00:16.452 ********* 2026-03-17 00:33:35.093178 | orchestrator | ok: 
[testbed-manager] 2026-03-17 00:33:35.093192 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:33:35.093206 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:33:35.093221 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:33:35.093234 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:33:35.093243 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:33:35.093249 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:33:35.093256 | orchestrator | 2026-03-17 00:33:35.093263 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-17 00:33:35.093270 | orchestrator | Tuesday 17 March 2026 00:33:28 +0000 (0:00:01.252) 0:00:17.705 ********* 2026-03-17 00:33:35.093276 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.093283 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.093289 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:35.093296 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:35.093302 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:35.093309 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:35.093315 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:35.093321 | orchestrator | 2026-03-17 00:33:35.093328 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-17 00:33:35.093335 | orchestrator | Tuesday 17 March 2026 00:33:29 +0000 (0:00:01.135) 0:00:18.841 ********* 2026-03-17 00:33:35.093357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:33:35.093372 | orchestrator | 2026-03-17 00:33:35.093379 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-17 00:33:35.093386 | orchestrator | Tuesday 17 March 2026 
00:33:29 +0000 (0:00:00.314) 0:00:19.155 ********* 2026-03-17 00:33:35.093393 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:33:35.093399 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:33:35.093406 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:33:35.093413 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:33:35.093420 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:33:35.093426 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:33:35.093433 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:33:35.093440 | orchestrator | 2026-03-17 00:33:35.093447 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-17 00:33:35.093454 | orchestrator | Tuesday 17 March 2026 00:33:30 +0000 (0:00:01.254) 0:00:20.410 ********* 2026-03-17 00:33:35.093461 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.093467 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:35.093474 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:35.093481 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:35.093488 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.093495 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:35.093501 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:35.093508 | orchestrator | 2026-03-17 00:33:35.093515 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-17 00:33:35.093522 | orchestrator | Tuesday 17 March 2026 00:33:31 +0000 (0:00:00.209) 0:00:20.619 ********* 2026-03-17 00:33:35.093529 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.093536 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:35.093542 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:35.093549 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:35.093556 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.093563 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:35.093570 | 
orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:35.093576 | orchestrator | 2026-03-17 00:33:35.093583 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-17 00:33:35.093590 | orchestrator | Tuesday 17 March 2026 00:33:31 +0000 (0:00:00.215) 0:00:20.835 ********* 2026-03-17 00:33:35.093597 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.093603 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:35.093610 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:35.093617 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:35.093624 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.093630 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:35.093637 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:35.093643 | orchestrator | 2026-03-17 00:33:35.093650 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-17 00:33:35.093657 | orchestrator | Tuesday 17 March 2026 00:33:31 +0000 (0:00:00.205) 0:00:21.040 ********* 2026-03-17 00:33:35.093683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:33:35.093692 | orchestrator | 2026-03-17 00:33:35.093699 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-17 00:33:35.093704 | orchestrator | Tuesday 17 March 2026 00:33:31 +0000 (0:00:00.295) 0:00:21.336 ********* 2026-03-17 00:33:35.093711 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.093717 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:35.093724 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:35.093730 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.093736 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:35.093743 | orchestrator | ok: 
[testbed-node-4] 2026-03-17 00:33:35.093750 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:35.093757 | orchestrator | 2026-03-17 00:33:35.093769 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-17 00:33:35.093776 | orchestrator | Tuesday 17 March 2026 00:33:32 +0000 (0:00:00.542) 0:00:21.878 ********* 2026-03-17 00:33:35.093783 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:33:35.093789 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:33:35.093797 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:33:35.093804 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:33:35.093811 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:33:35.093818 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:33:35.093825 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:33:35.093831 | orchestrator | 2026-03-17 00:33:35.093838 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-17 00:33:35.093844 | orchestrator | Tuesday 17 March 2026 00:33:32 +0000 (0:00:00.226) 0:00:22.104 ********* 2026-03-17 00:33:35.093863 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.093871 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:33:35.093878 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:33:35.093884 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.093891 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:33:35.093897 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:35.093905 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:35.093912 | orchestrator | 2026-03-17 00:33:35.093919 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-17 00:33:35.093927 | orchestrator | Tuesday 17 March 2026 00:33:33 +0000 (0:00:01.036) 0:00:23.141 ********* 2026-03-17 00:33:35.093934 | orchestrator | ok: [testbed-manager] 2026-03-17 
00:33:35.093941 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:33:35.093948 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:33:35.093956 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:33:35.093963 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.093970 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:33:35.093978 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:33:35.093985 | orchestrator | 2026-03-17 00:33:35.093992 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-17 00:33:35.093998 | orchestrator | Tuesday 17 March 2026 00:33:34 +0000 (0:00:00.568) 0:00:23.710 ********* 2026-03-17 00:33:35.094006 | orchestrator | ok: [testbed-manager] 2026-03-17 00:33:35.094012 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:33:35.094069 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:33:35.094077 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:33:35.094091 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.303529 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:34:15.303685 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.303710 | orchestrator | 2026-03-17 00:34:15.303730 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-17 00:34:15.303752 | orchestrator | Tuesday 17 March 2026 00:33:35 +0000 (0:00:01.039) 0:00:24.749 ********* 2026-03-17 00:34:15.303771 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.303791 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.303890 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.303910 | orchestrator | changed: [testbed-manager] 2026-03-17 00:34:15.303928 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:34:15.303945 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:34:15.303964 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:34:15.303984 | orchestrator | 2026-03-17 00:34:15.304004 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-17 00:34:15.304023 | orchestrator | Tuesday 17 March 2026 00:33:51 +0000 (0:00:16.708) 0:00:41.458 ********* 2026-03-17 00:34:15.304043 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:15.304062 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.304081 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.304099 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:15.304117 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.304133 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.304152 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.304204 | orchestrator | 2026-03-17 00:34:15.304221 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-17 00:34:15.304238 | orchestrator | Tuesday 17 March 2026 00:33:52 +0000 (0:00:00.212) 0:00:41.671 ********* 2026-03-17 00:34:15.304254 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:15.304270 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.304286 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.304303 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:15.304318 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.304333 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.304349 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.304366 | orchestrator | 2026-03-17 00:34:15.304383 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-17 00:34:15.304399 | orchestrator | Tuesday 17 March 2026 00:33:52 +0000 (0:00:00.199) 0:00:41.870 ********* 2026-03-17 00:34:15.304415 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:15.304431 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.304449 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.304466 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:15.304481 | orchestrator | ok: 
[testbed-node-3] 2026-03-17 00:34:15.304496 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.304514 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.304530 | orchestrator | 2026-03-17 00:34:15.304547 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-17 00:34:15.304563 | orchestrator | Tuesday 17 March 2026 00:33:52 +0000 (0:00:00.192) 0:00:42.063 ********* 2026-03-17 00:34:15.304582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:34:15.304601 | orchestrator | 2026-03-17 00:34:15.304617 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-17 00:34:15.304633 | orchestrator | Tuesday 17 March 2026 00:33:52 +0000 (0:00:00.259) 0:00:42.322 ********* 2026-03-17 00:34:15.304650 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:15.304666 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:15.304683 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.304700 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.304716 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.304732 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.304748 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.304763 | orchestrator | 2026-03-17 00:34:15.304802 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-17 00:34:15.304860 | orchestrator | Tuesday 17 March 2026 00:33:54 +0000 (0:00:01.539) 0:00:43.862 ********* 2026-03-17 00:34:15.304877 | orchestrator | changed: [testbed-manager] 2026-03-17 00:34:15.304894 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:34:15.304910 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:34:15.304926 | orchestrator | 
changed: [testbed-node-1] 2026-03-17 00:34:15.304942 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:34:15.304967 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:34:15.304983 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:34:15.305000 | orchestrator | 2026-03-17 00:34:15.305016 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-17 00:34:15.305033 | orchestrator | Tuesday 17 March 2026 00:33:55 +0000 (0:00:01.119) 0:00:44.981 ********* 2026-03-17 00:34:15.305049 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:15.305064 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.305080 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.305095 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:15.305111 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.305127 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.305143 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.305158 | orchestrator | 2026-03-17 00:34:15.305174 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-17 00:34:15.305204 | orchestrator | Tuesday 17 March 2026 00:33:56 +0000 (0:00:00.954) 0:00:45.936 ********* 2026-03-17 00:34:15.305222 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:34:15.305240 | orchestrator | 2026-03-17 00:34:15.305257 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-17 00:34:15.305274 | orchestrator | Tuesday 17 March 2026 00:33:56 +0000 (0:00:00.272) 0:00:46.208 ********* 2026-03-17 00:34:15.305290 | orchestrator | changed: [testbed-manager] 2026-03-17 00:34:15.305305 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:34:15.305322 | 
orchestrator | changed: [testbed-node-0] 2026-03-17 00:34:15.305337 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:34:15.305353 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:34:15.305368 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:34:15.305384 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:34:15.305399 | orchestrator | 2026-03-17 00:34:15.305443 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2026-03-17 00:34:15.305461 | orchestrator | Tuesday 17 March 2026 00:33:57 +0000 (0:00:01.143) 0:00:47.352 ********* 2026-03-17 00:34:15.305478 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:34:15.305494 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:34:15.305511 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:34:15.305527 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:34:15.305543 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:34:15.305562 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:34:15.305577 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:34:15.305593 | orchestrator | 2026-03-17 00:34:15.305611 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-17 00:34:15.305627 | orchestrator | Tuesday 17 March 2026 00:33:57 +0000 (0:00:00.216) 0:00:47.568 ********* 2026-03-17 00:34:15.305645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:34:15.305663 | orchestrator | 2026-03-17 00:34:15.305684 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-17 00:34:15.305702 | orchestrator | Tuesday 17 March 2026 00:33:58 +0000 (0:00:00.299) 0:00:47.868 ********* 2026-03-17 00:34:15.305719 | orchestrator | ok: 
[testbed-manager] 2026-03-17 00:34:15.305736 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:15.305753 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.305771 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.305789 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.305832 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.305853 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.305870 | orchestrator | 2026-03-17 00:34:15.305887 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-17 00:34:15.305905 | orchestrator | Tuesday 17 March 2026 00:34:00 +0000 (0:00:01.746) 0:00:49.614 ********* 2026-03-17 00:34:15.305923 | orchestrator | changed: [testbed-manager] 2026-03-17 00:34:15.305941 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:34:15.305959 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:34:15.305976 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:34:15.305994 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:34:15.306010 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:34:15.306121 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:34:15.306140 | orchestrator | 2026-03-17 00:34:15.306158 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-17 00:34:15.306175 | orchestrator | Tuesday 17 March 2026 00:34:01 +0000 (0:00:01.242) 0:00:50.856 ********* 2026-03-17 00:34:15.306192 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:34:15.306209 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:34:15.306244 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:34:15.306262 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:34:15.306280 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:34:15.306296 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:34:15.306314 | orchestrator | changed: [testbed-manager] 2026-03-17 00:34:15.306330 | 
orchestrator | 2026-03-17 00:34:15.306348 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-17 00:34:15.306365 | orchestrator | Tuesday 17 March 2026 00:34:12 +0000 (0:00:11.055) 0:01:01.912 ********* 2026-03-17 00:34:15.306382 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:15.306400 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.306419 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.306438 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.306457 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.306475 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.306493 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:15.306512 | orchestrator | 2026-03-17 00:34:15.306531 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-17 00:34:15.306549 | orchestrator | Tuesday 17 March 2026 00:34:13 +0000 (0:00:01.321) 0:01:03.234 ********* 2026-03-17 00:34:15.306566 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:15.306578 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.306588 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:15.306599 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.306609 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.306620 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.306641 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.306652 | orchestrator | 2026-03-17 00:34:15.306663 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-17 00:34:15.306673 | orchestrator | Tuesday 17 March 2026 00:34:14 +0000 (0:00:00.896) 0:01:04.131 ********* 2026-03-17 00:34:15.306684 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:15.306694 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.306705 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.306716 | orchestrator | ok: 
[testbed-node-2] 2026-03-17 00:34:15.306726 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.306736 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.306747 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.306758 | orchestrator | 2026-03-17 00:34:15.306768 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-17 00:34:15.306780 | orchestrator | Tuesday 17 March 2026 00:34:14 +0000 (0:00:00.245) 0:01:04.377 ********* 2026-03-17 00:34:15.306790 | orchestrator | ok: [testbed-manager] 2026-03-17 00:34:15.306801 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:34:15.306989 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:34:15.307010 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:34:15.307021 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:34:15.307032 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:34:15.307042 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:34:15.307053 | orchestrator | 2026-03-17 00:34:15.307065 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-17 00:34:15.307076 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.214) 0:01:04.591 ********* 2026-03-17 00:34:15.307087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:34:15.307100 | orchestrator | 2026-03-17 00:34:15.307126 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-17 00:36:31.367622 | orchestrator | Tuesday 17 March 2026 00:34:15 +0000 (0:00:00.279) 0:01:04.871 ********* 2026-03-17 00:36:31.367814 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:31.367842 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:31.367854 | orchestrator | 
ok: [testbed-node-4] 2026-03-17 00:36:31.367865 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:31.367898 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:31.367909 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:31.367920 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:31.367931 | orchestrator | 2026-03-17 00:36:31.367949 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-17 00:36:31.367967 | orchestrator | Tuesday 17 March 2026 00:34:17 +0000 (0:00:02.256) 0:01:07.128 ********* 2026-03-17 00:36:31.367985 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:31.368002 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:31.368013 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:31.368024 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:31.368035 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:31.368045 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:31.368055 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:31.368066 | orchestrator | 2026-03-17 00:36:31.368077 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-17 00:36:31.368090 | orchestrator | Tuesday 17 March 2026 00:34:18 +0000 (0:00:00.762) 0:01:07.890 ********* 2026-03-17 00:36:31.368109 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:31.368129 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:31.368148 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:31.368165 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:31.368183 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:31.368202 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:31.368222 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:31.368241 | orchestrator | 2026-03-17 00:36:31.368260 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-17 
00:36:31.368278 | orchestrator | Tuesday 17 March 2026 00:34:18 +0000 (0:00:00.238) 0:01:08.129 ********* 2026-03-17 00:36:31.368291 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:31.368303 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:31.368313 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:31.368324 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:31.368334 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:31.368345 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:31.368356 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:31.368366 | orchestrator | 2026-03-17 00:36:31.368377 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-17 00:36:31.368388 | orchestrator | Tuesday 17 March 2026 00:34:20 +0000 (0:00:01.459) 0:01:09.588 ********* 2026-03-17 00:36:31.368399 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:31.368410 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:31.368420 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:31.368431 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:31.368442 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:31.368452 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:31.368463 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:31.368474 | orchestrator | 2026-03-17 00:36:31.368484 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-17 00:36:31.368495 | orchestrator | Tuesday 17 March 2026 00:34:21 +0000 (0:00:01.686) 0:01:11.275 ********* 2026-03-17 00:36:31.368506 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:31.368517 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:31.368528 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:31.368538 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:31.368549 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:31.368560 | orchestrator | ok: 
[testbed-node-5] 2026-03-17 00:36:31.368570 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:31.368581 | orchestrator | 2026-03-17 00:36:31.368595 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-17 00:36:31.368613 | orchestrator | Tuesday 17 March 2026 00:34:24 +0000 (0:00:02.361) 0:01:13.637 ********* 2026-03-17 00:36:31.368655 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:31.368671 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:31.368686 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:31.368716 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:31.368734 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:31.368752 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:31.368771 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:31.368788 | orchestrator | 2026-03-17 00:36:31.368825 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-17 00:36:31.368844 | orchestrator | Tuesday 17 March 2026 00:35:01 +0000 (0:00:37.397) 0:01:51.035 ********* 2026-03-17 00:36:31.368860 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:31.368871 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:31.368882 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:31.368893 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:31.368904 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:31.368914 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:31.368925 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:31.368935 | orchestrator | 2026-03-17 00:36:31.368946 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-17 00:36:31.368957 | orchestrator | Tuesday 17 March 2026 00:36:17 +0000 (0:01:15.797) 0:03:06.833 ********* 2026-03-17 00:36:31.368967 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:31.368978 | orchestrator | 
ok: [testbed-node-0] 2026-03-17 00:36:31.368989 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:31.369000 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:31.369010 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:31.369021 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:31.369031 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:31.369042 | orchestrator | 2026-03-17 00:36:31.369053 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-17 00:36:31.369064 | orchestrator | Tuesday 17 March 2026 00:36:19 +0000 (0:00:01.830) 0:03:08.663 ********* 2026-03-17 00:36:31.369075 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:31.369085 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:31.369096 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:31.369107 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:31.369117 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:31.369128 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:31.369138 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:31.369149 | orchestrator | 2026-03-17 00:36:31.369159 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-17 00:36:31.369171 | orchestrator | Tuesday 17 March 2026 00:36:30 +0000 (0:00:11.207) 0:03:19.870 ********* 2026-03-17 00:36:31.369236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-17 00:36:31.369262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-17 00:36:31.369278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-17 00:36:31.369317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-17 00:36:31.369330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-17 00:36:31.369341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-03-17 00:36:31.369352 | orchestrator | 2026-03-17 00:36:31.369363 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-17 00:36:31.369374 | orchestrator | Tuesday 17 March 2026 00:36:30 +0000 (0:00:00.416) 0:03:20.286 ********* 2026-03-17 00:36:31.369385 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:36:31.369396 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:31.369407 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:36:31.369417 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:31.369428 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:36:31.369439 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:31.369450 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-17 00:36:31.369460 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:31.369471 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:36:31.369482 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:36:31.369493 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-17 00:36:31.369503 | orchestrator | 2026-03-17 00:36:31.369521 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-17 00:36:31.369532 | orchestrator | Tuesday 17 March 2026 00:36:31 +0000 (0:00:00.598) 0:03:20.885 ********* 2026-03-17 00:36:31.369543 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:36:31.369555 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:36:31.369566 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:36:31.369577 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:36:31.369587 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:36:31.369605 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:36:37.239266 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:36:37.239374 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:36:37.239390 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:36:37.239401 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:36:37.239413 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:37.239426 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:36:37.239462 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:36:37.239474 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:36:37.239485 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:36:37.239496 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:36:37.239507 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 
00:36:37.239518 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:36:37.239529 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:36:37.239540 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:36:37.239550 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:36:37.239561 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:36:37.239572 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:36:37.239583 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:37.239593 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:36:37.239604 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:36:37.239615 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:36:37.239655 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-17 00:36:37.239666 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:36:37.239677 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-17 00:36:37.239688 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:36:37.239698 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:36:37.239709 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-17 00:36:37.239720 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:36:37.239744 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-17 00:36:37.239756 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:36:37.239767 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-17 00:36:37.239778 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:37.239788 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-17 00:36:37.239799 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-17 00:36:37.239811 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-17 00:36:37.239823 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-17 00:36:37.239835 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-17 00:36:37.239847 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:37.239861 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-17 00:36:37.239881 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-17 00:36:37.239894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-17 00:36:37.239906 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-17 00:36:37.239918 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-17 00:36:37.239948 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-17 00:36:37.239961 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-17 00:36:37.239973 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-17 00:36:37.239986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-17 00:36:37.240012 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-17 00:36:37.240035 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-17 00:36:37.240047 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-17 00:36:37.240059 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-17 00:36:37.240072 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-17 00:36:37.240084 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-17 00:36:37.240096 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-17 00:36:37.240108 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-17 00:36:37.240120 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-17 00:36:37.240132 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-17 00:36:37.240144 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 
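For reference, the `rabbitmq` sysctl set being applied to testbed-node-0/1/2 above corresponds to a drop-in like the following. The values are taken directly from the task items in this log; the file name is illustrative, and in practice the osism.commons.sysctl role applies these through Ansible rather than a hand-written file:

```
# /etc/sysctl.d/99-rabbitmq.conf (illustrative name)
# Values as applied by osism.commons.sysctl in this run
net.ipv4.tcp_keepalive_time = 6
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
```

The same parameters are skipped on testbed-manager and nodes 3-5 because the role only applies a key's settings to hosts in the matching group (here, the control nodes running RabbitMQ).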
2026-03-17 00:36:37.240157 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-17 00:36:37.240168 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-17 00:36:37.240178 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-17 00:36:37.240189 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-17 00:36:37.240200 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-17 00:36:37.240210 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-17 00:36:37.240221 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-17 00:36:37.240232 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-17 00:36:37.240242 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-17 00:36:37.240253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-17 00:36:37.240264 | orchestrator | 2026-03-17 00:36:37.240275 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-17 00:36:37.240286 | orchestrator | Tuesday 17 March 2026 00:36:36 +0000 (0:00:04.736) 0:03:25.621 ********* 2026-03-17 00:36:37.240297 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:36:37.240315 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:36:37.240331 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:36:37.240342 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:36:37.240353 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:36:37.240364 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:36:37.240375 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-17 00:36:37.240385 | orchestrator | 2026-03-17 00:36:37.240396 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-17 00:36:37.240407 | orchestrator | Tuesday 17 March 2026 00:36:36 +0000 (0:00:00.662) 0:03:26.284 ********* 2026-03-17 00:36:37.240417 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:36:37.240428 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:37.240439 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:36:37.240450 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:36:37.240461 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:36:37.240471 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:36:37.240482 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:36:37.240493 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:36:37.240503 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:36:37.240514 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:36:37.240532 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 
00:36:50.253153 | orchestrator | 2026-03-17 00:36:50.253269 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-03-17 00:36:50.253300 | orchestrator | Tuesday 17 March 2026 00:36:37 +0000 (0:00:00.555) 0:03:26.839 ********* 2026-03-17 00:36:50.253319 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:36:50.253330 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:50.253352 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:36:50.253363 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:36:50.253372 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:50.253382 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:50.253392 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-17 00:36:50.253402 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:50.253411 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:36:50.253421 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:36:50.253430 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-17 00:36:50.253440 | orchestrator | 2026-03-17 00:36:50.253449 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-17 00:36:50.253458 | orchestrator | Tuesday 17 March 2026 00:36:37 +0000 (0:00:00.510) 0:03:27.349 ********* 2026-03-17 00:36:50.253467 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-17 
00:36:50.253476 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:50.253514 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-17 00:36:50.253525 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-17 00:36:50.253535 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:36:50.253544 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:36:50.253553 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-17 00:36:50.253563 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:36:50.253573 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-17 00:36:50.253583 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-17 00:36:50.253593 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-17 00:36:50.253602 | orchestrator | 2026-03-17 00:36:50.253643 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-17 00:36:50.253654 | orchestrator | Tuesday 17 March 2026 00:36:39 +0000 (0:00:01.591) 0:03:28.941 ********* 2026-03-17 00:36:50.253663 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:50.253673 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:36:50.253682 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:36:50.253693 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:36:50.253703 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:50.253713 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:50.253723 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:50.253732 | orchestrator | 2026-03-17 00:36:50.253743 | orchestrator | TASK 
[osism.commons.services : Populate service facts] ************************* 2026-03-17 00:36:50.253753 | orchestrator | Tuesday 17 March 2026 00:36:39 +0000 (0:00:00.269) 0:03:29.210 ********* 2026-03-17 00:36:50.253763 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:50.253787 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:50.253797 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:50.253807 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:50.253818 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:50.253828 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:50.253838 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:50.253847 | orchestrator | 2026-03-17 00:36:50.253856 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-17 00:36:50.253865 | orchestrator | Tuesday 17 March 2026 00:36:44 +0000 (0:00:05.178) 0:03:34.389 ********* 2026-03-17 00:36:50.253874 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-17 00:36:50.253886 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-17 00:36:50.253896 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:50.253906 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-17 00:36:50.253916 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:36:50.253926 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-17 00:36:50.253936 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:36:50.253945 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:36:50.253955 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-17 00:36:50.253965 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-17 00:36:50.253976 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:50.253986 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:50.253996 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-17 00:36:50.254007 
| orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:50.254074 | orchestrator | 2026-03-17 00:36:50.254085 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-17 00:36:50.254095 | orchestrator | Tuesday 17 March 2026 00:36:45 +0000 (0:00:00.262) 0:03:34.651 ********* 2026-03-17 00:36:50.254105 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-17 00:36:50.254116 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-17 00:36:50.254138 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-17 00:36:50.254183 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-17 00:36:50.254194 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-17 00:36:50.254203 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-17 00:36:50.254213 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-17 00:36:50.254222 | orchestrator | 2026-03-17 00:36:50.254232 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-17 00:36:50.254241 | orchestrator | Tuesday 17 March 2026 00:36:46 +0000 (0:00:01.134) 0:03:35.786 ********* 2026-03-17 00:36:50.254291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:36:50.254306 | orchestrator | 2026-03-17 00:36:50.254315 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-17 00:36:50.254324 | orchestrator | Tuesday 17 March 2026 00:36:46 +0000 (0:00:00.366) 0:03:36.152 ********* 2026-03-17 00:36:50.254333 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:50.254342 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:50.254351 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:50.254360 | orchestrator | ok: 
[testbed-node-2] 2026-03-17 00:36:50.254370 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:50.254379 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:50.254388 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:50.254398 | orchestrator | 2026-03-17 00:36:50.254408 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-17 00:36:50.254417 | orchestrator | Tuesday 17 March 2026 00:36:47 +0000 (0:00:01.217) 0:03:37.370 ********* 2026-03-17 00:36:50.254426 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:50.254434 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:50.254442 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:50.254451 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:50.254461 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:50.254470 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:50.254479 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:50.254489 | orchestrator | 2026-03-17 00:36:50.254499 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-17 00:36:50.254509 | orchestrator | Tuesday 17 March 2026 00:36:48 +0000 (0:00:00.604) 0:03:37.974 ********* 2026-03-17 00:36:50.254519 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:50.254528 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:50.254538 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:50.254548 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:50.254557 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:50.254567 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:50.254577 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:50.254586 | orchestrator | 2026-03-17 00:36:50.254636 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-17 00:36:50.254649 | orchestrator | Tuesday 17 March 2026 00:36:49 +0000 (0:00:00.739) 
0:03:38.713 ********* 2026-03-17 00:36:50.254659 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:50.254670 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:50.254681 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:50.254692 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:50.254702 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:50.254713 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:50.254723 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:50.254733 | orchestrator | 2026-03-17 00:36:50.254743 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-17 00:36:50.254753 | orchestrator | Tuesday 17 March 2026 00:36:49 +0000 (0:00:00.562) 0:03:39.276 ********* 2026-03-17 00:36:50.254772 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705902.4493082, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:50.254796 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705921.533958, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:50.254808 | orchestrator | 
changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705925.8539999, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:50.254863 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705929.0510361, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555177 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705948.3895905, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555271 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705935.14414, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555284 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773705936.198063, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555308 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555336 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555345 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555353 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555423 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555434 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555442 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-17 00:36:55.555451 | orchestrator | 2026-03-17 00:36:55.555461 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-17 00:36:55.555477 | orchestrator | Tuesday 17 March 2026 00:36:50 +0000 (0:00:00.935) 0:03:40.212 ********* 2026-03-17 00:36:55.555486 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:55.555495 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:55.555503 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:55.555511 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:55.555519 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:55.555527 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:55.555535 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:55.555543 | orchestrator | 2026-03-17 00:36:55.555551 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-17 00:36:55.555559 | orchestrator | Tuesday 17 March 2026 00:36:51 +0000 (0:00:01.149) 0:03:41.362 ********* 2026-03-17 00:36:55.555567 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:55.555575 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:55.555586 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:55.555595 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:55.555677 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:55.555686 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:55.555694 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:55.555702 | orchestrator | 2026-03-17 00:36:55.555710 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-17 00:36:55.555718 | orchestrator | Tuesday 17 March 2026 00:36:52 +0000 (0:00:01.124) 0:03:42.486 ********* 2026-03-17 00:36:55.555727 | orchestrator | changed: [testbed-manager] 2026-03-17 00:36:55.555736 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:36:55.555745 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:36:55.555754 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:36:55.555763 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:36:55.555771 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:36:55.555780 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:36:55.555789 | orchestrator | 2026-03-17 00:36:55.555798 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-17 00:36:55.555807 | orchestrator | Tuesday 17 March 2026 00:36:54 +0000 (0:00:01.159) 0:03:43.645 ********* 2026-03-17 00:36:55.555816 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:36:55.555825 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:36:55.555834 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:36:55.555842 | orchestrator | skipping: [testbed-node-2] 
2026-03-17 00:36:55.555851 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:36:55.555860 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:36:55.555869 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:36:55.555877 | orchestrator | 2026-03-17 00:36:55.555886 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-17 00:36:55.555895 | orchestrator | Tuesday 17 March 2026 00:36:54 +0000 (0:00:00.332) 0:03:43.978 ********* 2026-03-17 00:36:55.555904 | orchestrator | ok: [testbed-manager] 2026-03-17 00:36:55.555914 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:36:55.555923 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:36:55.555931 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:36:55.555940 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:36:55.555950 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:36:55.555958 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:36:55.555966 | orchestrator | 2026-03-17 00:36:55.555975 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-17 00:36:55.555984 | orchestrator | Tuesday 17 March 2026 00:36:55 +0000 (0:00:00.750) 0:03:44.728 ********* 2026-03-17 00:36:55.555995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:36:55.556006 | orchestrator | 2026-03-17 00:36:55.556016 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-17 00:36:55.556031 | orchestrator | Tuesday 17 March 2026 00:36:55 +0000 (0:00:00.396) 0:03:45.124 ********* 2026-03-17 00:38:12.491820 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:12.491923 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:38:12.491940 | orchestrator | changed: 
[testbed-node-0] 2026-03-17 00:38:12.491951 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:38:12.491962 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:38:12.491974 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:38:12.491985 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:38:12.491997 | orchestrator | 2026-03-17 00:38:12.492009 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-17 00:38:12.492021 | orchestrator | Tuesday 17 March 2026 00:37:04 +0000 (0:00:08.548) 0:03:53.672 ********* 2026-03-17 00:38:12.492032 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:12.492043 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:12.492054 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:12.492065 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:12.492075 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:12.492086 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:12.492097 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:12.492107 | orchestrator | 2026-03-17 00:38:12.492118 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-17 00:38:12.492129 | orchestrator | Tuesday 17 March 2026 00:37:05 +0000 (0:00:01.300) 0:03:54.973 ********* 2026-03-17 00:38:12.492140 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:12.492151 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:12.492162 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:12.492172 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:12.492183 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:12.492194 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:12.492204 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:12.492215 | orchestrator | 2026-03-17 00:38:12.492226 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-17 00:38:12.492237 | orchestrator | 
Tuesday 17 March 2026 00:37:06 +0000 (0:00:00.971) 0:03:55.945 ********* 2026-03-17 00:38:12.492248 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:12.492259 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:12.492269 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:12.492280 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:12.492291 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:12.492301 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:12.492312 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:12.492322 | orchestrator | 2026-03-17 00:38:12.492334 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-17 00:38:12.492345 | orchestrator | Tuesday 17 March 2026 00:37:06 +0000 (0:00:00.254) 0:03:56.199 ********* 2026-03-17 00:38:12.492356 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:12.492369 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:12.492382 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:12.492394 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:12.492406 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:12.492419 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:12.492431 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:12.492443 | orchestrator | 2026-03-17 00:38:12.492456 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-17 00:38:12.492491 | orchestrator | Tuesday 17 March 2026 00:37:06 +0000 (0:00:00.284) 0:03:56.484 ********* 2026-03-17 00:38:12.492504 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:12.492517 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:12.492529 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:12.492542 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:12.492554 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:12.492566 | orchestrator | ok: [testbed-node-4] 2026-03-17 
00:38:12.492578 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:12.492591 | orchestrator | 2026-03-17 00:38:12.492604 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-17 00:38:12.492638 | orchestrator | Tuesday 17 March 2026 00:37:07 +0000 (0:00:00.278) 0:03:56.763 ********* 2026-03-17 00:38:12.492651 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:12.492663 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:12.492675 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:12.492687 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:12.492700 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:12.492712 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:12.492724 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:12.492734 | orchestrator | 2026-03-17 00:38:12.492745 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-17 00:38:12.492756 | orchestrator | Tuesday 17 March 2026 00:37:12 +0000 (0:00:05.480) 0:04:02.244 ********* 2026-03-17 00:38:12.492768 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:38:12.492782 | orchestrator | 2026-03-17 00:38:12.492793 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-17 00:38:12.492804 | orchestrator | Tuesday 17 March 2026 00:37:13 +0000 (0:00:00.370) 0:04:02.614 ********* 2026-03-17 00:38:12.492815 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-17 00:38:12.492826 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-17 00:38:12.492837 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-17 00:38:12.492848 | orchestrator | skipping: 
[testbed-manager] 2026-03-17 00:38:12.492859 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-17 00:38:12.492869 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:12.492880 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-17 00:38:12.492891 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-17 00:38:12.492902 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-17 00:38:12.492913 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-17 00:38:12.492923 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:12.492934 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:12.492945 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-17 00:38:12.492956 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-17 00:38:12.492967 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-17 00:38:12.492978 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:38:12.493005 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-03-17 00:38:12.493017 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:38:12.493028 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-17 00:38:12.493039 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-17 00:38:12.493050 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:12.493061 | orchestrator | 2026-03-17 00:38:12.493072 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-17 00:38:12.493082 | orchestrator | Tuesday 17 March 2026 00:37:13 +0000 (0:00:00.323) 0:04:02.938 ********* 2026-03-17 00:38:12.493094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:38:12.493105 | orchestrator | 2026-03-17 00:38:12.493116 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-17 00:38:12.493127 | orchestrator | Tuesday 17 March 2026 00:37:13 +0000 (0:00:00.469) 0:04:03.407 ********* 2026-03-17 00:38:12.493138 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-17 00:38:12.493148 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-17 00:38:12.493159 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:38:12.493233 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-17 00:38:12.493246 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:12.493257 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-17 00:38:12.493268 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:12.493278 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-17 00:38:12.493289 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:12.493299 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-17 00:38:12.493310 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:38:12.493321 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:38:12.493331 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-17 00:38:12.493342 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:12.493353 | orchestrator | 2026-03-17 00:38:12.493363 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-17 00:38:12.493391 | orchestrator | Tuesday 17 March 2026 00:37:14 +0000 (0:00:00.291) 0:04:03.699 ********* 2026-03-17 00:38:12.493403 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:38:12.493414 | orchestrator | 2026-03-17 00:38:12.493425 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-17 00:38:12.493440 | orchestrator | Tuesday 17 March 2026 00:37:14 +0000 (0:00:00.383) 0:04:04.082 ********* 2026-03-17 00:38:12.493451 | orchestrator | changed: [testbed-manager] 2026-03-17 00:38:12.493462 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:38:12.493491 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:38:12.493502 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:38:12.493512 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:38:12.493523 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:38:12.493533 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:38:12.493544 | orchestrator | 2026-03-17 00:38:12.493555 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-17 00:38:12.493566 | orchestrator | Tuesday 17 March 2026 00:37:49 +0000 (0:00:34.742) 0:04:38.824 ********* 2026-03-17 00:38:12.493576 | orchestrator | changed: [testbed-manager] 2026-03-17 00:38:12.493587 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:38:12.493598 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:38:12.493608 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:38:12.493619 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:38:12.493629 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:38:12.493640 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:38:12.493650 | orchestrator | 2026-03-17 00:38:12.493661 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-17 00:38:12.493672 | orchestrator | 
Tuesday 17 March 2026 00:37:57 +0000 (0:00:08.233) 0:04:47.058 ********* 2026-03-17 00:38:12.493683 | orchestrator | changed: [testbed-manager] 2026-03-17 00:38:12.493694 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:38:12.493704 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:38:12.493715 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:38:12.493725 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:38:12.493736 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:38:12.493747 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:38:12.493757 | orchestrator | 2026-03-17 00:38:12.493768 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-17 00:38:12.493779 | orchestrator | Tuesday 17 March 2026 00:38:05 +0000 (0:00:07.688) 0:04:54.746 ********* 2026-03-17 00:38:12.493790 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:12.493800 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:12.493811 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:12.493822 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:12.493840 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:12.493851 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:12.493861 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:12.493872 | orchestrator | 2026-03-17 00:38:12.493883 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-17 00:38:12.493894 | orchestrator | Tuesday 17 March 2026 00:38:06 +0000 (0:00:01.664) 0:04:56.411 ********* 2026-03-17 00:38:12.493904 | orchestrator | changed: [testbed-manager] 2026-03-17 00:38:12.493915 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:38:12.493926 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:38:12.493937 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:38:12.493948 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:38:12.493959 | orchestrator | changed: 
[testbed-node-5] 2026-03-17 00:38:12.493969 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:38:12.493980 | orchestrator | 2026-03-17 00:38:12.493997 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-17 00:38:22.892577 | orchestrator | Tuesday 17 March 2026 00:38:12 +0000 (0:00:05.649) 0:05:02.061 ********* 2026-03-17 00:38:22.892683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:38:22.892701 | orchestrator | 2026-03-17 00:38:22.892714 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-17 00:38:22.892725 | orchestrator | Tuesday 17 March 2026 00:38:12 +0000 (0:00:00.331) 0:05:02.393 ********* 2026-03-17 00:38:22.892736 | orchestrator | changed: [testbed-manager] 2026-03-17 00:38:22.892748 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:38:22.892759 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:38:22.892770 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:38:22.892780 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:38:22.892791 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:38:22.892802 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:38:22.892813 | orchestrator | 2026-03-17 00:38:22.892824 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-17 00:38:22.892835 | orchestrator | Tuesday 17 March 2026 00:38:13 +0000 (0:00:00.632) 0:05:03.025 ********* 2026-03-17 00:38:22.892846 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:22.892858 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:22.892868 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:22.892879 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:22.892890 | 
orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:22.892901 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:22.892912 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:22.892923 | orchestrator | 2026-03-17 00:38:22.892934 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-17 00:38:22.892945 | orchestrator | Tuesday 17 March 2026 00:38:15 +0000 (0:00:01.775) 0:05:04.801 ********* 2026-03-17 00:38:22.892955 | orchestrator | changed: [testbed-manager] 2026-03-17 00:38:22.892966 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:38:22.892977 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:38:22.892988 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:38:22.892999 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:38:22.893009 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:38:22.893020 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:38:22.893031 | orchestrator | 2026-03-17 00:38:22.893041 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-17 00:38:22.893052 | orchestrator | Tuesday 17 March 2026 00:38:15 +0000 (0:00:00.758) 0:05:05.560 ********* 2026-03-17 00:38:22.893063 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:38:22.893074 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:22.893087 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:22.893099 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:22.893111 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:38:22.893144 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:38:22.893156 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:22.893167 | orchestrator | 2026-03-17 00:38:22.893178 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-17 00:38:22.893204 | orchestrator | Tuesday 17 March 2026 00:38:16 +0000 (0:00:00.225) 
0:05:05.786 ********* 2026-03-17 00:38:22.893216 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:38:22.893227 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:22.893238 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:22.893249 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:22.893259 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:38:22.893270 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:38:22.893281 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:22.893292 | orchestrator | 2026-03-17 00:38:22.893303 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-17 00:38:22.893315 | orchestrator | Tuesday 17 March 2026 00:38:16 +0000 (0:00:00.341) 0:05:06.128 ********* 2026-03-17 00:38:22.893326 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:22.893337 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:22.893348 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:22.893359 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:22.893370 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:22.893381 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:22.893392 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:22.893403 | orchestrator | 2026-03-17 00:38:22.893414 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-17 00:38:22.893425 | orchestrator | Tuesday 17 March 2026 00:38:16 +0000 (0:00:00.344) 0:05:06.472 ********* 2026-03-17 00:38:22.893436 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:38:22.893447 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:22.893556 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:22.893568 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:22.893578 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:38:22.893589 | orchestrator | skipping: [testbed-node-4] 2026-03-17 
00:38:22.893600 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:22.893610 | orchestrator | 2026-03-17 00:38:22.893621 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-17 00:38:22.893634 | orchestrator | Tuesday 17 March 2026 00:38:17 +0000 (0:00:00.219) 0:05:06.691 ********* 2026-03-17 00:38:22.893645 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:22.893656 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:22.893667 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:22.893677 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:22.893688 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:22.893699 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:22.893709 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:22.893720 | orchestrator | 2026-03-17 00:38:22.893731 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-17 00:38:22.893742 | orchestrator | Tuesday 17 March 2026 00:38:17 +0000 (0:00:00.262) 0:05:06.953 ********* 2026-03-17 00:38:22.893753 | orchestrator | ok: [testbed-manager] =>  2026-03-17 00:38:22.893763 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:38:22.893774 | orchestrator | ok: [testbed-node-0] =>  2026-03-17 00:38:22.893785 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:38:22.893796 | orchestrator | ok: [testbed-node-1] =>  2026-03-17 00:38:22.893807 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:38:22.893817 | orchestrator | ok: [testbed-node-2] =>  2026-03-17 00:38:22.893828 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:38:22.893857 | orchestrator | ok: [testbed-node-3] =>  2026-03-17 00:38:22.893869 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:38:22.893880 | orchestrator | ok: [testbed-node-4] =>  2026-03-17 00:38:22.893891 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:38:22.893901 | orchestrator | ok: [testbed-node-5] =>  
2026-03-17 00:38:22.893912 | orchestrator |  docker_version: 5:27.5.1 2026-03-17 00:38:22.893931 | orchestrator | 2026-03-17 00:38:22.893943 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-17 00:38:22.893953 | orchestrator | Tuesday 17 March 2026 00:38:17 +0000 (0:00:00.252) 0:05:07.206 ********* 2026-03-17 00:38:22.893964 | orchestrator | ok: [testbed-manager] =>  2026-03-17 00:38:22.893975 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:38:22.893986 | orchestrator | ok: [testbed-node-0] =>  2026-03-17 00:38:22.893997 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:38:22.894007 | orchestrator | ok: [testbed-node-1] =>  2026-03-17 00:38:22.894080 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:38:22.894095 | orchestrator | ok: [testbed-node-2] =>  2026-03-17 00:38:22.894105 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:38:22.894116 | orchestrator | ok: [testbed-node-3] =>  2026-03-17 00:38:22.894127 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:38:22.894138 | orchestrator | ok: [testbed-node-4] =>  2026-03-17 00:38:22.894148 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:38:22.894171 | orchestrator | ok: [testbed-node-5] =>  2026-03-17 00:38:22.894182 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-17 00:38:22.894193 | orchestrator | 2026-03-17 00:38:22.894203 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-17 00:38:22.894214 | orchestrator | Tuesday 17 March 2026 00:38:17 +0000 (0:00:00.288) 0:05:07.494 ********* 2026-03-17 00:38:22.894225 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:38:22.894236 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:22.894247 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:22.894258 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:22.894268 | orchestrator | skipping: [testbed-node-3] 
2026-03-17 00:38:22.894279 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:38:22.894290 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:22.894301 | orchestrator | 2026-03-17 00:38:22.894316 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-17 00:38:22.894334 | orchestrator | Tuesday 17 March 2026 00:38:18 +0000 (0:00:00.269) 0:05:07.764 ********* 2026-03-17 00:38:22.894359 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:38:22.894382 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:22.894399 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:22.894417 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:38:22.894435 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:38:22.894478 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:38:22.894496 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:38:22.894513 | orchestrator | 2026-03-17 00:38:22.894530 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-17 00:38:22.894547 | orchestrator | Tuesday 17 March 2026 00:38:18 +0000 (0:00:00.257) 0:05:08.022 ********* 2026-03-17 00:38:22.894577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:38:22.894599 | orchestrator | 2026-03-17 00:38:22.894617 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-17 00:38:22.894635 | orchestrator | Tuesday 17 March 2026 00:38:18 +0000 (0:00:00.400) 0:05:08.422 ********* 2026-03-17 00:38:22.894653 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:22.894672 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:22.894690 | orchestrator | ok: [testbed-node-3] 2026-03-17 
00:38:22.894708 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:22.894728 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:22.894746 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:22.894766 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:22.894783 | orchestrator | 2026-03-17 00:38:22.894800 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-17 00:38:22.894819 | orchestrator | Tuesday 17 March 2026 00:38:19 +0000 (0:00:00.894) 0:05:09.317 ********* 2026-03-17 00:38:22.894853 | orchestrator | ok: [testbed-manager] 2026-03-17 00:38:22.894870 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:38:22.894888 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:38:22.894901 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:38:22.894911 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:38:22.894922 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:38:22.894933 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:38:22.894943 | orchestrator | 2026-03-17 00:38:22.894954 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-17 00:38:22.894965 | orchestrator | Tuesday 17 March 2026 00:38:22 +0000 (0:00:02.831) 0:05:12.148 ********* 2026-03-17 00:38:22.894976 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-17 00:38:22.894987 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-17 00:38:22.894998 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-17 00:38:22.895008 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-17 00:38:22.895019 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-17 00:38:22.895030 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:38:22.895041 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-17 00:38:22.895051 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-03-17 00:38:22.895062 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-17 00:38:22.895073 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-17 00:38:22.895083 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:38:22.895094 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-17 00:38:22.895105 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-17 00:38:22.895115 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-17 00:38:22.895126 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:38:22.895137 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-17 00:38:22.895161 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-17 00:39:21.627411 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:39:21.627528 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-17 00:39:21.627545 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-17 00:39:21.627557 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-17 00:39:21.627568 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-17 00:39:21.627579 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:39:21.627590 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:39:21.627601 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-17 00:39:21.627612 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-17 00:39:21.627623 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-17 00:39:21.627634 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:39:21.627645 | orchestrator | 2026-03-17 00:39:21.627657 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-17 00:39:21.627669 | orchestrator | Tuesday 
17 March 2026 00:38:23 +0000 (0:00:00.530) 0:05:12.679 ********* 2026-03-17 00:39:21.627680 | orchestrator | ok: [testbed-manager] 2026-03-17 00:39:21.627691 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.627702 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.627713 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.627724 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.627734 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.627745 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.627756 | orchestrator | 2026-03-17 00:39:21.627767 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-17 00:39:21.627778 | orchestrator | Tuesday 17 March 2026 00:38:29 +0000 (0:00:06.463) 0:05:19.142 ********* 2026-03-17 00:39:21.627788 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.627822 | orchestrator | ok: [testbed-manager] 2026-03-17 00:39:21.627833 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.627847 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.627866 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.627884 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.627904 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.627924 | orchestrator | 2026-03-17 00:39:21.627943 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-17 00:39:21.627961 | orchestrator | Tuesday 17 March 2026 00:38:30 +0000 (0:00:01.032) 0:05:20.175 ********* 2026-03-17 00:39:21.627974 | orchestrator | ok: [testbed-manager] 2026-03-17 00:39:21.627986 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.627999 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.628011 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.628023 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.628035 | orchestrator | 
changed: [testbed-node-2] 2026-03-17 00:39:21.628047 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.628060 | orchestrator | 2026-03-17 00:39:21.628071 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-17 00:39:21.628084 | orchestrator | Tuesday 17 March 2026 00:38:38 +0000 (0:00:07.985) 0:05:28.160 ********* 2026-03-17 00:39:21.628096 | orchestrator | changed: [testbed-manager] 2026-03-17 00:39:21.628125 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.628137 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.628151 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.628171 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.628191 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.628211 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.628231 | orchestrator | 2026-03-17 00:39:21.628250 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-17 00:39:21.628268 | orchestrator | Tuesday 17 March 2026 00:38:41 +0000 (0:00:03.321) 0:05:31.481 ********* 2026-03-17 00:39:21.628279 | orchestrator | ok: [testbed-manager] 2026-03-17 00:39:21.628289 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.628300 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.628311 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.628322 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.628332 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.628343 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.628354 | orchestrator | 2026-03-17 00:39:21.628365 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-17 00:39:21.628376 | orchestrator | Tuesday 17 March 2026 00:38:43 +0000 (0:00:01.210) 0:05:32.692 ********* 2026-03-17 00:39:21.628433 | orchestrator | ok: [testbed-manager] 
2026-03-17 00:39:21.628444 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.628454 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.628465 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.628476 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.628486 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.628497 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.628508 | orchestrator | 2026-03-17 00:39:21.628519 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-17 00:39:21.628530 | orchestrator | Tuesday 17 March 2026 00:38:44 +0000 (0:00:01.260) 0:05:33.952 ********* 2026-03-17 00:39:21.628541 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:39:21.628552 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:39:21.628563 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:39:21.628574 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:39:21.628584 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:39:21.628595 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:39:21.628605 | orchestrator | changed: [testbed-manager] 2026-03-17 00:39:21.628616 | orchestrator | 2026-03-17 00:39:21.628627 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-17 00:39:21.628648 | orchestrator | Tuesday 17 March 2026 00:38:44 +0000 (0:00:00.516) 0:05:34.469 ********* 2026-03-17 00:39:21.628659 | orchestrator | ok: [testbed-manager] 2026-03-17 00:39:21.628670 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.628681 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.628691 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.628702 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.628713 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.628723 | orchestrator | changed: [testbed-node-4] 2026-03-17 
00:39:21.628734 | orchestrator | 2026-03-17 00:39:21.628745 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-17 00:39:21.628789 | orchestrator | Tuesday 17 March 2026 00:38:54 +0000 (0:00:09.599) 0:05:44.068 ********* 2026-03-17 00:39:21.628810 | orchestrator | changed: [testbed-manager] 2026-03-17 00:39:21.628826 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.628837 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.628848 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.628858 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.628869 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.628880 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.628890 | orchestrator | 2026-03-17 00:39:21.628901 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-17 00:39:21.628912 | orchestrator | Tuesday 17 March 2026 00:38:55 +0000 (0:00:01.003) 0:05:45.071 ********* 2026-03-17 00:39:21.628923 | orchestrator | ok: [testbed-manager] 2026-03-17 00:39:21.628933 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.628944 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.628955 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.628965 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.628976 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.628987 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.628998 | orchestrator | 2026-03-17 00:39:21.629016 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-17 00:39:21.629037 | orchestrator | Tuesday 17 March 2026 00:39:04 +0000 (0:00:08.984) 0:05:54.055 ********* 2026-03-17 00:39:21.629063 | orchestrator | ok: [testbed-manager] 2026-03-17 00:39:21.629081 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.629096 | 
orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.629111 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.629127 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.629143 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.629159 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.629178 | orchestrator | 2026-03-17 00:39:21.629198 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-17 00:39:21.629216 | orchestrator | Tuesday 17 March 2026 00:39:15 +0000 (0:00:11.050) 0:06:05.106 ********* 2026-03-17 00:39:21.629235 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-17 00:39:21.629248 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-17 00:39:21.629258 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-17 00:39:21.629269 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-17 00:39:21.629280 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-17 00:39:21.629291 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-17 00:39:21.629301 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-17 00:39:21.629312 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-17 00:39:21.629322 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-17 00:39:21.629333 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-17 00:39:21.629344 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-17 00:39:21.629354 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-17 00:39:21.629365 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-17 00:39:21.629376 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-17 00:39:21.629433 | orchestrator | 2026-03-17 00:39:21.629444 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-03-17 00:39:21.629455 | orchestrator | Tuesday 17 March 2026 00:39:16 +0000 (0:00:01.169) 0:06:06.275 ********* 2026-03-17 00:39:21.629466 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:39:21.629477 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:39:21.629487 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:39:21.629498 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:39:21.629509 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:39:21.629520 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:39:21.629531 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:39:21.629542 | orchestrator | 2026-03-17 00:39:21.629553 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-17 00:39:21.629564 | orchestrator | Tuesday 17 March 2026 00:39:17 +0000 (0:00:00.526) 0:06:06.802 ********* 2026-03-17 00:39:21.629575 | orchestrator | ok: [testbed-manager] 2026-03-17 00:39:21.629586 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:39:21.629597 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:39:21.629608 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:39:21.629619 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:39:21.629629 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:39:21.629640 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:39:21.629651 | orchestrator | 2026-03-17 00:39:21.629662 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-17 00:39:21.629695 | orchestrator | Tuesday 17 March 2026 00:39:20 +0000 (0:00:03.668) 0:06:10.471 ********* 2026-03-17 00:39:21.629706 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:39:21.629717 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:39:21.629728 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:39:21.629739 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 00:39:21.629749 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:39:21.629760 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:39:21.629771 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:39:21.629782 | orchestrator | 2026-03-17 00:39:21.629793 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-17 00:39:21.629804 | orchestrator | Tuesday 17 March 2026 00:39:21 +0000 (0:00:00.470) 0:06:10.941 ********* 2026-03-17 00:39:21.629815 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-17 00:39:21.629826 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-17 00:39:21.629837 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:39:21.629848 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-17 00:39:21.629859 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-17 00:39:21.629870 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:39:21.629881 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-17 00:39:21.629892 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-17 00:39:21.629903 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:39:21.629963 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-17 00:39:40.193271 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-17 00:39:40.193471 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:39:40.193490 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-17 00:39:40.193502 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-17 00:39:40.193513 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:39:40.193525 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-17 00:39:40.193536 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-17 00:39:40.193547 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:39:40.193558 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-17 00:39:40.193594 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-17 00:39:40.193606 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:39:40.193618 | orchestrator | 2026-03-17 00:39:40.193631 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-17 00:39:40.193643 | orchestrator | Tuesday 17 March 2026 00:39:21 +0000 (0:00:00.538) 0:06:11.480 ********* 2026-03-17 00:39:40.193654 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:39:40.193665 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:39:40.193676 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:39:40.193687 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:39:40.193698 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:39:40.193710 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:39:40.193721 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:39:40.193732 | orchestrator | 2026-03-17 00:39:40.193743 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-17 00:39:40.193755 | orchestrator | Tuesday 17 March 2026 00:39:22 +0000 (0:00:00.473) 0:06:11.954 ********* 2026-03-17 00:39:40.193766 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:39:40.193777 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:39:40.193788 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:39:40.193799 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:39:40.193810 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:39:40.193823 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:39:40.193836 | orchestrator | skipping: [testbed-node-5] 
2026-03-17 00:39:40.193848 | orchestrator |
2026-03-17 00:39:40.193861 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-17 00:39:40.193874 | orchestrator | Tuesday 17 March 2026 00:39:22 +0000 (0:00:00.617) 0:06:12.571 *********
2026-03-17 00:39:40.193886 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:39:40.193899 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:39:40.193912 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:39:40.193925 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:39:40.193937 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:39:40.193955 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:39:40.193973 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:39:40.193991 | orchestrator |
2026-03-17 00:39:40.194009 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-17 00:39:40.194120 | orchestrator | Tuesday 17 March 2026 00:39:23 +0000 (0:00:00.491) 0:06:13.062 *********
2026-03-17 00:39:40.194142 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:40.194164 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:39:40.194184 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:39:40.194204 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:39:40.194215 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:39:40.194226 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:39:40.194237 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:39:40.194248 | orchestrator |
2026-03-17 00:39:40.194259 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-17 00:39:40.194270 | orchestrator | Tuesday 17 March 2026 00:39:25 +0000 (0:00:01.731) 0:06:14.794 *********
2026-03-17 00:39:40.194282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:39:40.194296 | orchestrator |
2026-03-17 00:39:40.194307 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-17 00:39:40.194318 | orchestrator | Tuesday 17 March 2026 00:39:26 +0000 (0:00:00.817) 0:06:15.611 *********
2026-03-17 00:39:40.194329 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:40.194340 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:39:40.194373 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:39:40.194385 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:39:40.194396 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:39:40.194430 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:39:40.194442 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:39:40.194453 | orchestrator |
2026-03-17 00:39:40.194464 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-17 00:39:40.194475 | orchestrator | Tuesday 17 March 2026 00:39:27 +0000 (0:00:00.988) 0:06:16.600 *********
2026-03-17 00:39:40.194486 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:40.194497 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:39:40.194508 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:39:40.194518 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:39:40.194529 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:39:40.194540 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:39:40.194551 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:39:40.194561 | orchestrator |
2026-03-17 00:39:40.194572 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-17 00:39:40.194583 | orchestrator | Tuesday 17 March 2026 00:39:27 +0000 (0:00:00.807) 0:06:17.408 *********
2026-03-17 00:39:40.194594 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:40.194605 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:39:40.194616 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:39:40.194627 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:39:40.194637 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:39:40.194648 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:39:40.194659 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:39:40.194669 | orchestrator |
2026-03-17 00:39:40.194680 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-17 00:39:40.194714 | orchestrator | Tuesday 17 March 2026 00:39:29 +0000 (0:00:01.442) 0:06:18.851 *********
2026-03-17 00:39:40.194726 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:39:40.194736 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:39:40.194747 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:39:40.194758 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:39:40.194769 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:39:40.194780 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:39:40.194790 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:39:40.194801 | orchestrator |
2026-03-17 00:39:40.194812 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-17 00:39:40.194823 | orchestrator | Tuesday 17 March 2026 00:39:30 +0000 (0:00:01.362) 0:06:20.214 *********
2026-03-17 00:39:40.194834 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:40.194845 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:39:40.194856 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:39:40.194867 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:39:40.194956 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:39:40.194967 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:39:40.194978 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:39:40.194989 | orchestrator |
2026-03-17 00:39:40.195000 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-17 00:39:40.195011 | orchestrator | Tuesday 17 March 2026 00:39:31 +0000 (0:00:01.291) 0:06:21.505 *********
2026-03-17 00:39:40.195022 | orchestrator | changed: [testbed-manager]
2026-03-17 00:39:40.195033 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:39:40.195043 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:39:40.195054 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:39:40.195065 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:39:40.195076 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:39:40.195125 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:39:40.195137 | orchestrator |
2026-03-17 00:39:40.195148 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-17 00:39:40.195159 | orchestrator | Tuesday 17 March 2026 00:39:33 +0000 (0:00:01.497) 0:06:23.002 *********
2026-03-17 00:39:40.195171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:39:40.195199 | orchestrator |
2026-03-17 00:39:40.195210 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-17 00:39:40.195221 | orchestrator | Tuesday 17 March 2026 00:39:34 +0000 (0:00:00.797) 0:06:23.800 *********
2026-03-17 00:39:40.195232 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:40.195243 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:39:40.195254 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:39:40.195265 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:39:40.195276 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:39:40.195286 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:39:40.195297 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:39:40.195308 | orchestrator |
2026-03-17 00:39:40.195319 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-17 00:39:40.195331 | orchestrator | Tuesday 17 March 2026 00:39:35 +0000 (0:00:01.369) 0:06:25.170 *********
2026-03-17 00:39:40.195342 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:40.195372 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:39:40.195384 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:39:40.195395 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:39:40.195406 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:39:40.195417 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:39:40.195427 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:39:40.195438 | orchestrator |
2026-03-17 00:39:40.195492 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-17 00:39:40.195505 | orchestrator | Tuesday 17 March 2026 00:39:36 +0000 (0:00:01.264) 0:06:26.435 *********
2026-03-17 00:39:40.195516 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:40.195527 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:39:40.195538 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:39:40.195549 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:39:40.195560 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:39:40.195571 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:39:40.195582 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:39:40.195593 | orchestrator |
2026-03-17 00:39:40.195604 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-17 00:39:40.195614 | orchestrator | Tuesday 17 March 2026 00:39:37 +0000 (0:00:01.062) 0:06:27.497 *********
2026-03-17 00:39:40.195626 | orchestrator | ok: [testbed-manager]
2026-03-17 00:39:40.195641 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:39:40.195659 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:39:40.195670 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:39:40.195681 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:39:40.195692 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:39:40.195702 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:39:40.195713 | orchestrator |
2026-03-17 00:39:40.195724 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-17 00:39:40.195735 | orchestrator | Tuesday 17 March 2026 00:39:39 +0000 (0:00:01.133) 0:06:28.631 *********
2026-03-17 00:39:40.195746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:39:40.195758 | orchestrator |
2026-03-17 00:39:40.195769 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:39:40.195780 | orchestrator | Tuesday 17 March 2026 00:39:39 +0000 (0:00:00.865) 0:06:29.496 *********
2026-03-17 00:39:40.195791 | orchestrator |
2026-03-17 00:39:40.195802 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:39:40.195813 | orchestrator | Tuesday 17 March 2026 00:39:39 +0000 (0:00:00.038) 0:06:29.535 *********
2026-03-17 00:39:40.195824 | orchestrator |
2026-03-17 00:39:40.195835 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:39:40.195845 | orchestrator | Tuesday 17 March 2026 00:39:40 +0000 (0:00:00.187) 0:06:29.723 *********
2026-03-17 00:39:40.195865 | orchestrator |
2026-03-17 00:39:40.195876 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:39:40.195896 | orchestrator | Tuesday 17 March 2026 00:39:40 +0000 (0:00:00.039) 0:06:29.762 *********
2026-03-17 00:40:06.061698 | orchestrator |
2026-03-17 00:40:06.061799 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:40:06.061815 | orchestrator | Tuesday 17 March 2026 00:39:40 +0000 (0:00:00.039) 0:06:29.802 *********
2026-03-17 00:40:06.061826 | orchestrator |
2026-03-17 00:40:06.061838 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:40:06.061849 | orchestrator | Tuesday 17 March 2026 00:39:40 +0000 (0:00:00.046) 0:06:29.848 *********
2026-03-17 00:40:06.061861 | orchestrator |
2026-03-17 00:40:06.061871 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-17 00:40:06.061883 | orchestrator | Tuesday 17 March 2026 00:39:40 +0000 (0:00:00.038) 0:06:29.887 *********
2026-03-17 00:40:06.061894 | orchestrator |
2026-03-17 00:40:06.061904 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-17 00:40:06.061915 | orchestrator | Tuesday 17 March 2026 00:39:40 +0000 (0:00:00.039) 0:06:29.926 *********
2026-03-17 00:40:06.061926 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:06.061938 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:06.061949 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:06.061959 | orchestrator |
2026-03-17 00:40:06.061970 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-17 00:40:06.061981 | orchestrator | Tuesday 17 March 2026 00:39:41 +0000 (0:00:01.189) 0:06:31.116 *********
2026-03-17 00:40:06.061992 | orchestrator | changed: [testbed-manager]
2026-03-17 00:40:06.062004 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:06.062066 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:06.062080 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:06.062091 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:06.062102 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:06.062113 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:06.062123 | orchestrator |
2026-03-17 00:40:06.062134 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-17 00:40:06.062145 | orchestrator | Tuesday 17 March 2026 00:39:42 +0000 (0:00:01.435) 0:06:32.551 *********
2026-03-17 00:40:06.062156 | orchestrator | changed: [testbed-manager]
2026-03-17 00:40:06.062167 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:06.062178 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:06.062189 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:06.062199 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:06.062210 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:06.062221 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:06.062232 | orchestrator |
2026-03-17 00:40:06.062245 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-17 00:40:06.062258 | orchestrator | Tuesday 17 March 2026 00:39:44 +0000 (0:00:02.119) 0:06:33.696 *********
2026-03-17 00:40:06.062271 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:40:06.062283 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:06.062295 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:06.062309 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:06.062363 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:06.062377 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:06.062407 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:06.062420 | orchestrator |
2026-03-17 00:40:06.062434 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-17 00:40:06.062447 | orchestrator | Tuesday 17 March 2026 00:39:46 +0000 (0:00:02.119) 0:06:35.815 *********
2026-03-17 00:40:06.062460 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:06.062472 | orchestrator |
2026-03-17 00:40:06.062485 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-17 00:40:06.062498 | orchestrator | Tuesday 17 March 2026 00:39:46 +0000 (0:00:00.091) 0:06:35.907 *********
2026-03-17 00:40:06.062533 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:06.062547 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:06.062560 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:06.062572 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:06.062585 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:06.062598 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:06.062608 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:06.062619 | orchestrator |
2026-03-17 00:40:06.062630 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-17 00:40:06.062642 | orchestrator | Tuesday 17 March 2026 00:39:47 +0000 (0:00:01.207) 0:06:37.115 *********
2026-03-17 00:40:06.062653 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:40:06.062664 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:06.062675 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:06.062685 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:06.062696 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:06.062706 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:06.062717 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:06.062728 | orchestrator |
2026-03-17 00:40:06.062738 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-17 00:40:06.062749 | orchestrator | Tuesday 17 March 2026 00:39:48 +0000 (0:00:00.511) 0:06:37.626 *********
2026-03-17 00:40:06.062761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:40:06.062775 | orchestrator |
2026-03-17 00:40:06.062786 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-17 00:40:06.062797 | orchestrator | Tuesday 17 March 2026 00:39:48 +0000 (0:00:00.844) 0:06:38.471 *********
2026-03-17 00:40:06.062807 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:06.062818 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:06.062829 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:06.062840 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:06.062851 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:06.062862 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:06.062872 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:06.062883 | orchestrator |
2026-03-17 00:40:06.062894 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-17 00:40:06.062905 | orchestrator | Tuesday 17 March 2026 00:39:49 +0000 (0:00:01.052) 0:06:39.523 *********
2026-03-17 00:40:06.062916 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-17 00:40:06.062944 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-17 00:40:06.062956 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-17 00:40:06.062967 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-17 00:40:06.062978 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-17 00:40:06.062989 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-17 00:40:06.063000 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-17 00:40:06.063010 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-17 00:40:06.063021 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-17 00:40:06.063032 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-17 00:40:06.063043 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-17 00:40:06.063054 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-17 00:40:06.063064 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-17 00:40:06.063075 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-17 00:40:06.063086 | orchestrator |
2026-03-17 00:40:06.063097 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-17 00:40:06.063108 | orchestrator | Tuesday 17 March 2026 00:39:52 +0000 (0:00:02.640) 0:06:42.163 *********
2026-03-17 00:40:06.063127 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:40:06.063138 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:06.063149 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:06.063160 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:06.063170 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:06.063181 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:06.063192 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:06.063203 | orchestrator |
2026-03-17 00:40:06.063214 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-17 00:40:06.063225 | orchestrator | Tuesday 17 March 2026 00:39:53 +0000 (0:00:00.474) 0:06:42.638 *********
2026-03-17 00:40:06.063237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:40:06.063250 | orchestrator |
2026-03-17 00:40:06.063261 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-17 00:40:06.063272 | orchestrator | Tuesday 17 March 2026 00:39:53 +0000 (0:00:00.838) 0:06:43.476 *********
2026-03-17 00:40:06.063282 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:06.063294 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:06.063305 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:06.063315 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:06.063358 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:06.063370 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:06.063386 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:06.063397 | orchestrator |
2026-03-17 00:40:06.063409 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-17 00:40:06.063419 | orchestrator | Tuesday 17 March 2026 00:39:54 +0000 (0:00:00.762) 0:06:44.239 *********
2026-03-17 00:40:06.063430 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:06.063441 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:06.063452 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:06.063463 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:06.063474 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:06.063485 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:06.063495 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:06.063506 | orchestrator |
2026-03-17 00:40:06.063517 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-17 00:40:06.063528 | orchestrator | Tuesday 17 March 2026 00:39:55 +0000 (0:00:00.767) 0:06:45.007 *********
2026-03-17 00:40:06.063539 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:40:06.063550 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:06.063561 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:06.063572 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:06.063583 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:06.063594 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:06.063604 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:06.063615 | orchestrator |
2026-03-17 00:40:06.063626 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-17 00:40:06.063637 | orchestrator | Tuesday 17 March 2026 00:39:55 +0000 (0:00:00.419) 0:06:45.427 *********
2026-03-17 00:40:06.063647 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:06.063672 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:06.063683 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:06.063694 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:06.063705 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:06.063715 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:06.063726 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:06.063737 | orchestrator |
2026-03-17 00:40:06.063748 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-17 00:40:06.063759 | orchestrator | Tuesday 17 March 2026 00:39:57 +0000 (0:00:01.465) 0:06:46.893 *********
2026-03-17 00:40:06.063777 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:40:06.063788 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:06.063799 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:06.063809 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:06.063820 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:06.063831 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:06.063841 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:06.063852 | orchestrator |
2026-03-17 00:40:06.063863 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-17 00:40:06.063874 | orchestrator | Tuesday 17 March 2026 00:39:57 +0000 (0:00:00.554) 0:06:47.447 *********
2026-03-17 00:40:06.063884 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:06.063895 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:06.063906 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:06.063917 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:06.063927 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:06.063938 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:06.063955 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:37.136981 | orchestrator |
2026-03-17 00:40:37.137039 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-17 00:40:37.137045 | orchestrator | Tuesday 17 March 2026 00:40:06 +0000 (0:00:08.249) 0:06:55.697 *********
2026-03-17 00:40:37.137050 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:37.137055 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:37.137059 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:37.137063 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:37.137067 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:37.137071 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:37.137075 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:37.137081 | orchestrator |
2026-03-17 00:40:37.137087 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-17 00:40:37.137093 | orchestrator | Tuesday 17 March 2026 00:40:07 +0000 (0:00:01.291) 0:06:56.988 *********
2026-03-17 00:40:37.137099 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:37.137106 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:37.137112 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:37.137118 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:37.137124 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:37.137131 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:37.137137 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:37.137144 | orchestrator |
2026-03-17 00:40:37.137150 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-17 00:40:37.137157 | orchestrator | Tuesday 17 March 2026 00:40:09 +0000 (0:00:01.616) 0:06:58.604 *********
2026-03-17 00:40:37.137164 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:37.137169 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:40:37.137173 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:40:37.137177 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:40:37.137180 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:40:37.137184 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:40:37.137188 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:40:37.137192 | orchestrator |
2026-03-17 00:40:37.137196 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-17 00:40:37.137200 | orchestrator | Tuesday 17 March 2026 00:40:10 +0000 (0:00:00.850) 0:07:00.277 *********
2026-03-17 00:40:37.137204 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:37.137208 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:37.137212 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:37.137215 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:37.137219 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:37.137223 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:37.137227 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:37.137230 | orchestrator |
2026-03-17 00:40:37.137234 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-17 00:40:37.137251 | orchestrator | Tuesday 17 March 2026 00:40:11 +0000 (0:00:00.747) 0:07:01.128 *********
2026-03-17 00:40:37.137255 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:40:37.137259 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:37.137263 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:37.137267 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:37.137271 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:37.137275 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:37.137334 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:37.137342 | orchestrator |
2026-03-17 00:40:37.137349 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-17 00:40:37.137356 | orchestrator | Tuesday 17 March 2026 00:40:12 +0000 (0:00:00.747) 0:07:01.875 *********
2026-03-17 00:40:37.137361 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:40:37.137365 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:40:37.137368 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:40:37.137372 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:40:37.137376 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:40:37.137380 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:40:37.137383 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:40:37.137387 | orchestrator |
2026-03-17 00:40:37.137391 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-17 00:40:37.137395 | orchestrator | Tuesday 17 March 2026 00:40:12 +0000 (0:00:00.615) 0:07:02.490 *********
2026-03-17 00:40:37.137399 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:37.137403 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:37.137406 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:37.137410 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:37.137414 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:37.137418 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:37.137421 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:37.137441 | orchestrator |
2026-03-17 00:40:37.137445 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-17 00:40:37.137449 | orchestrator | Tuesday 17 March 2026 00:40:13 +0000 (0:00:00.488) 0:07:02.979 *********
2026-03-17 00:40:37.137453 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:37.137458 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:37.137464 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:37.137473 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:37.137480 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:37.137486 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:37.137492 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:37.137498 | orchestrator |
2026-03-17 00:40:37.137505 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-17 00:40:37.137511 | orchestrator | Tuesday 17 March 2026 00:40:13 +0000 (0:00:00.487) 0:07:03.466 *********
2026-03-17 00:40:37.137518 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:37.137532 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:37.137543 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:37.137550 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:37.137556 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:37.137562 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:40:37.137568 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:37.137574 | orchestrator |
2026-03-17 00:40:37.137581 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-17 00:40:37.137587 | orchestrator | Tuesday 17 March 2026 00:40:14 +0000 (0:00:00.486) 0:07:03.952 *********
2026-03-17 00:40:37.137593 | orchestrator | ok: [testbed-manager]
2026-03-17 00:40:37.137599 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:40:37.137604 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:40:37.137610 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:40:37.137617 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:40:37.137623 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:40:37.137630 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:40:37.137636 | orchestrator | 2026-03-17 00:40:37.137655 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-17 00:40:37.137665 | orchestrator | Tuesday 17 March 2026 00:40:19 +0000 (0:00:05.611) 0:07:09.564 ********* 2026-03-17 00:40:37.137669 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:40:37.137673 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:40:37.137677 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:40:37.137680 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:40:37.137684 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:40:37.137688 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:40:37.137692 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:40:37.137696 | orchestrator | 2026-03-17 00:40:37.137699 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-17 00:40:37.137703 | orchestrator | Tuesday 17 March 2026 00:40:20 +0000 (0:00:00.660) 0:07:10.224 ********* 2026-03-17 00:40:37.137719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:40:37.137727 | orchestrator | 2026-03-17 00:40:37.137732 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-17 00:40:37.137738 | orchestrator | Tuesday 17 March 2026 00:40:21 +0000 (0:00:00.687) 0:07:10.912 ********* 2026-03-17 00:40:37.137743 | orchestrator | ok: [testbed-manager] 2026-03-17 00:40:37.137749 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:40:37.137754 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:40:37.137760 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:40:37.137766 | 
orchestrator | ok: [testbed-node-2] 2026-03-17 00:40:37.137772 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:40:37.137778 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:40:37.137785 | orchestrator | 2026-03-17 00:40:37.137791 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-17 00:40:37.137797 | orchestrator | Tuesday 17 March 2026 00:40:23 +0000 (0:00:01.945) 0:07:12.857 ********* 2026-03-17 00:40:37.137800 | orchestrator | ok: [testbed-manager] 2026-03-17 00:40:37.137804 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:40:37.137808 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:40:37.137811 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:40:37.137816 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:40:37.137823 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:40:37.137829 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:40:37.137835 | orchestrator | 2026-03-17 00:40:37.137841 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-17 00:40:37.137847 | orchestrator | Tuesday 17 March 2026 00:40:24 +0000 (0:00:01.143) 0:07:14.001 ********* 2026-03-17 00:40:37.137853 | orchestrator | ok: [testbed-manager] 2026-03-17 00:40:37.137859 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:40:37.137865 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:40:37.137872 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:40:37.137878 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:40:37.137885 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:40:37.137892 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:40:37.137899 | orchestrator | 2026-03-17 00:40:37.137907 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-17 00:40:37.137913 | orchestrator | Tuesday 17 March 2026 00:40:25 +0000 (0:00:00.861) 0:07:14.862 ********* 2026-03-17 00:40:37.137920 | orchestrator | changed: 
[testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:40:37.137928 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:40:37.137935 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:40:37.137941 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:40:37.137953 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:40:37.137957 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:40:37.137961 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-17 00:40:37.137965 | orchestrator | 2026-03-17 00:40:37.137969 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-17 00:40:37.137973 | orchestrator | Tuesday 17 March 2026 00:40:26 +0000 (0:00:01.596) 0:07:16.458 ********* 2026-03-17 00:40:37.137977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:40:37.137981 | orchestrator | 2026-03-17 00:40:37.137985 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-17 00:40:37.137988 | 
orchestrator | Tuesday 17 March 2026 00:40:27 +0000 (0:00:00.832) 0:07:17.290 ********* 2026-03-17 00:40:37.137992 | orchestrator | changed: [testbed-manager] 2026-03-17 00:40:37.137996 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:40:37.138000 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:40:37.138003 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:40:37.138007 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:40:37.138011 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:40:37.138043 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:40:37.138048 | orchestrator | 2026-03-17 00:40:37.138058 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-17 00:41:06.462302 | orchestrator | Tuesday 17 March 2026 00:40:37 +0000 (0:00:09.414) 0:07:26.705 ********* 2026-03-17 00:41:06.462540 | orchestrator | ok: [testbed-manager] 2026-03-17 00:41:06.462569 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:41:06.462581 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:41:06.462592 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:41:06.462603 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:41:06.462614 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:41:06.462625 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:41:06.462636 | orchestrator | 2026-03-17 00:41:06.462648 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-17 00:41:06.462659 | orchestrator | Tuesday 17 March 2026 00:40:38 +0000 (0:00:01.706) 0:07:28.411 ********* 2026-03-17 00:41:06.462670 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:41:06.462681 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:41:06.462692 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:41:06.462703 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:41:06.462714 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:41:06.462725 | orchestrator | ok: [testbed-node-4] 
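The chrony tasks above render chrony.conf.j2 onto each host before the restart handler below fires. The role's real template is not shown in this log; as a rough, illustrative sketch (server names and options assumed, not taken from this job), a rendered file of this kind resembles:

```
# /etc/chrony/chrony.conf -- illustrative example only, not the role's actual output
pool 0.debian.pool.ntp.org iburst
driftfile /var/lib/chrony/chrony.drift
makestep 1.0 3
rtcsync
```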
2026-03-17 00:41:06.462736 | orchestrator |
2026-03-17 00:41:06.462747 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-17 00:41:06.462758 | orchestrator | Tuesday 17 March 2026 00:40:40 +0000 (0:00:01.412) 0:07:29.824 *********
2026-03-17 00:41:06.462769 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:06.462832 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:06.462843 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:06.462854 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:06.462865 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:06.462876 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:06.462886 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:06.462897 | orchestrator |
2026-03-17 00:41:06.462908 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-17 00:41:06.462919 | orchestrator |
2026-03-17 00:41:06.462930 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-17 00:41:06.462975 | orchestrator | Tuesday 17 March 2026 00:40:41 +0000 (0:00:01.093) 0:07:30.917 *********
2026-03-17 00:41:06.462987 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:41:06.462998 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:41:06.463008 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:41:06.463020 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:41:06.463030 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:41:06.463041 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:41:06.463051 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:41:06.463062 | orchestrator |
2026-03-17 00:41:06.463073 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-17 00:41:06.463084 | orchestrator |
2026-03-17 00:41:06.463094 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-17 00:41:06.463105 | orchestrator | Tuesday 17 March 2026 00:40:41 +0000 (0:00:00.435) 0:07:31.353 *********
2026-03-17 00:41:06.463116 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:06.463127 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:06.463138 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:06.463149 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:06.463175 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:06.463186 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:06.463197 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:06.463208 | orchestrator |
2026-03-17 00:41:06.463219 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-17 00:41:06.463230 | orchestrator | Tuesday 17 March 2026 00:40:43 +0000 (0:00:01.395) 0:07:32.748 *********
2026-03-17 00:41:06.463241 | orchestrator | ok: [testbed-manager]
2026-03-17 00:41:06.463274 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:41:06.463285 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:41:06.463296 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:41:06.463306 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:41:06.463317 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:41:06.463328 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:41:06.463339 | orchestrator |
2026-03-17 00:41:06.463350 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-17 00:41:06.463361 | orchestrator | Tuesday 17 March 2026 00:40:45 +0000 (0:00:02.193) 0:07:34.942 *********
2026-03-17 00:41:06.463376 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:41:06.463396 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:41:06.463416 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:41:06.463435 | orchestrator | skipping: [testbed-node-2]
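The journald role above copies a configuration file onto every host and later restarts systemd-journald via a handler. The actual settings are not visible in this log; a typical journald configuration of this kind (values purely illustrative) looks like:

```ini
# /etc/systemd/journald.conf -- illustrative values only, not the role's actual settings
[Journal]
Storage=persistent
SystemMaxUse=1G
MaxRetentionSec=1month
```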
2026-03-17 00:41:06.463456 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:41:06.463476 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:41:06.463495 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:41:06.463513 | orchestrator |
2026-03-17 00:41:06.463524 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-17 00:41:06.463590 | orchestrator | Tuesday 17 March 2026 00:40:45 +0000 (0:00:00.458) 0:07:35.401 *********
2026-03-17 00:41:06.463604 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:41:06.463617 | orchestrator |
2026-03-17 00:41:06.463628 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-17 00:41:06.463639 | orchestrator | Tuesday 17 March 2026 00:40:46 +0000 (0:00:00.771) 0:07:36.173 *********
2026-03-17 00:41:06.463652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:41:06.463665 | orchestrator |
2026-03-17 00:41:06.463676 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-17 00:41:06.463687 | orchestrator | Tuesday 17 March 2026 00:40:47 +0000 (0:00:00.867) 0:07:37.041 *********
2026-03-17 00:41:06.463709 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:06.463720 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:06.463731 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:06.463742 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:06.463753 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:06.463763 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:06.463774 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:06.463785 | orchestrator |
2026-03-17 00:41:06.463818 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-17 00:41:06.463829 | orchestrator | Tuesday 17 March 2026 00:40:56 +0000 (0:00:08.843) 0:07:45.884 *********
2026-03-17 00:41:06.463840 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:06.463851 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:06.463861 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:06.463872 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:06.463883 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:06.463893 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:06.463904 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:06.463915 | orchestrator |
2026-03-17 00:41:06.463925 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-17 00:41:06.463936 | orchestrator | Tuesday 17 March 2026 00:40:57 +0000 (0:00:00.809) 0:07:46.694 *********
2026-03-17 00:41:06.463947 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:06.463957 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:06.463968 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:06.463979 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:06.463989 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:06.464000 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:06.464010 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:06.464021 | orchestrator |
2026-03-17 00:41:06.464032 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-17 00:41:06.464047 | orchestrator | Tuesday 17 March 2026 00:40:58 +0000 (0:00:01.299) 0:07:47.993 *********
2026-03-17 00:41:06.464066 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:06.464085 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:06.464096 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:06.464107 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:06.464118 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:06.464128 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:06.464139 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:06.464149 | orchestrator |
2026-03-17 00:41:06.464160 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-17 00:41:06.464171 | orchestrator | Tuesday 17 March 2026 00:41:00 +0000 (0:00:01.726) 0:07:49.719 *********
2026-03-17 00:41:06.464181 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:06.464192 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:06.464202 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:06.464213 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:06.464223 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:06.464234 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:06.464304 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:06.464320 | orchestrator |
2026-03-17 00:41:06.464331 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-17 00:41:06.464342 | orchestrator | Tuesday 17 March 2026 00:41:01 +0000 (0:00:01.153) 0:07:50.872 *********
2026-03-17 00:41:06.464353 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:06.464363 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:06.464374 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:06.464384 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:06.464402 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:06.464413 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:06.464423 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:06.464434 | orchestrator |
2026-03-17 00:41:06.464454 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-17 00:41:06.464465 | orchestrator |
2026-03-17 00:41:06.464475 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-17 00:41:06.464486 | orchestrator | Tuesday 17 March 2026 00:41:02 +0000 (0:00:01.049) 0:07:51.922 *********
2026-03-17 00:41:06.464497 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:41:06.464508 | orchestrator |
2026-03-17 00:41:06.464519 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-17 00:41:06.464529 | orchestrator | Tuesday 17 March 2026 00:41:03 +0000 (0:00:00.774) 0:07:52.696 *********
2026-03-17 00:41:06.464540 | orchestrator | ok: [testbed-manager]
2026-03-17 00:41:06.464550 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:41:06.464561 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:41:06.464572 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:41:06.464582 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:41:06.464593 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:41:06.464603 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:41:06.464614 | orchestrator |
2026-03-17 00:41:06.464624 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-17 00:41:06.464635 | orchestrator | Tuesday 17 March 2026 00:41:03 +0000 (0:00:00.728) 0:07:53.425 *********
2026-03-17 00:41:06.464645 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:06.464656 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:06.464667 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:06.464677 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:06.464688 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:06.464698 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:06.464709 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:06.464720 | orchestrator |
2026-03-17 00:41:06.464730 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-17 00:41:06.464741 | orchestrator | Tuesday 17 March 2026 00:41:04 +0000 (0:00:01.132) 0:07:54.557 *********
2026-03-17 00:41:06.464752 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:41:06.464762 | orchestrator |
2026-03-17 00:41:06.464773 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-17 00:41:06.464784 | orchestrator | Tuesday 17 March 2026 00:41:05 +0000 (0:00:00.712) 0:07:55.269 *********
2026-03-17 00:41:06.464794 | orchestrator | ok: [testbed-manager]
2026-03-17 00:41:06.464805 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:41:06.464815 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:41:06.464826 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:41:06.464836 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:41:06.464847 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:41:06.464857 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:41:06.464868 | orchestrator |
2026-03-17 00:41:06.464887 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-17 00:41:07.700169 | orchestrator | Tuesday 17 March 2026 00:41:06 +0000 (0:00:00.761) 0:07:56.031 *********
2026-03-17 00:41:07.700362 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:07.700395 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:07.700417 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:07.700438 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:07.700458 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:07.700478 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:07.700497 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:07.700517 | orchestrator |
2026-03-17 00:41:07.700537 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:41:07.700558 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-17 00:41:07.700615 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-17 00:41:07.700636 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-17 00:41:07.700647 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-17 00:41:07.700658 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-17 00:41:07.700669 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-17 00:41:07.700680 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-17 00:41:07.700691 | orchestrator |
2026-03-17 00:41:07.700707 | orchestrator |
2026-03-17 00:41:07.700726 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:41:07.700746 | orchestrator | Tuesday 17 March 2026 00:41:07 +0000 (0:00:01.050) 0:07:57.081 *********
2026-03-17 00:41:07.700765 | orchestrator | ===============================================================================
2026-03-17 00:41:07.700785 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.80s
2026-03-17 00:41:07.700805 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.40s
2026-03-17 00:41:07.700843 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.74s
2026-03-17 00:41:07.700858 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.71s
2026-03-17 00:41:07.700871 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.21s
2026-03-17 00:41:07.700884 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.06s
2026-03-17 00:41:07.700897 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.05s
2026-03-17 00:41:07.700910 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.60s
2026-03-17 00:41:07.700921 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.41s
2026-03-17 00:41:07.700932 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.98s
2026-03-17 00:41:07.700952 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.84s
2026-03-17 00:41:07.700970 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.55s
2026-03-17 00:41:07.700989 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.25s
2026-03-17 00:41:07.701008 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.23s
2026-03-17 00:41:07.701027 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.99s
2026-03-17 00:41:07.701045 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.69s
2026-03-17 00:41:07.701061 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.46s
2026-03-17 00:41:07.701078 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.65s
2026-03-17 00:41:07.701096 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.61s
2026-03-17 00:41:07.701115 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.48s
2026-03-17 00:41:07.821218 | orchestrator | + osism apply fail2ban
2026-03-17 00:41:19.241926 | orchestrator | 2026-03-17 00:41:19 | INFO  | Prepare task for execution of fail2ban.
2026-03-17 00:41:19.319283 | orchestrator | 2026-03-17 00:41:19 | INFO  | Task baf4115d-b226-4a3a-9f27-ab0e11ba93f0 (fail2ban) was prepared for execution.
2026-03-17 00:41:19.319409 | orchestrator | 2026-03-17 00:41:19 | INFO  | It takes a moment until task baf4115d-b226-4a3a-9f27-ab0e11ba93f0 (fail2ban) has been started and output is visible here.
2026-03-17 00:41:39.425379 | orchestrator |
2026-03-17 00:41:39.425490 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-17 00:41:39.425507 | orchestrator |
2026-03-17 00:41:39.425519 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-17 00:41:39.425531 | orchestrator | Tuesday 17 March 2026 00:41:22 +0000 (0:00:00.286) 0:00:00.286 *********
2026-03-17 00:41:39.425544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:41:39.425558 | orchestrator |
2026-03-17 00:41:39.425569 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-17 00:41:39.425580 | orchestrator | Tuesday 17 March 2026 00:41:23 +0000 (0:00:00.838) 0:00:01.124 *********
2026-03-17 00:41:39.425591 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:39.425603 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:39.425614 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:39.425625 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:39.425636 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:39.425647 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:39.425657 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:39.425668 | orchestrator |
2026-03-17 00:41:39.425679 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-17 00:41:39.425690 | orchestrator | Tuesday 17 March 2026 00:41:34 +0000 (0:00:11.664) 0:00:12.788 *********
2026-03-17 00:41:39.425701 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:39.425712 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:39.425723 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:39.425734 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:39.425744 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:39.425755 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:39.425766 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:39.425777 | orchestrator |
2026-03-17 00:41:39.425788 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-17 00:41:39.425799 | orchestrator | Tuesday 17 March 2026 00:41:36 +0000 (0:00:01.562) 0:00:14.351 *********
2026-03-17 00:41:39.425810 | orchestrator | ok: [testbed-manager]
2026-03-17 00:41:39.425822 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:41:39.425833 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:41:39.425844 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:41:39.425854 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:41:39.425865 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:41:39.425876 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:41:39.425887 | orchestrator |
2026-03-17 00:41:39.425898 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-17 00:41:39.425910 | orchestrator | Tuesday 17 March 2026 00:41:37 +0000 (0:00:01.197) 0:00:15.548 *********
2026-03-17 00:41:39.425923 | orchestrator | changed: [testbed-manager]
2026-03-17 00:41:39.425936 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:41:39.425948 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:41:39.425960 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:41:39.425972 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:41:39.425984 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:41:39.425997 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:41:39.426009 | orchestrator |
2026-03-17 00:41:39.426086 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:41:39.426117 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:41:39.426131 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:41:39.426171 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:41:39.426184 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:41:39.426197 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:41:39.426232 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:41:39.426245 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:41:39.426258 | orchestrator |
2026-03-17 00:41:39.426271 | orchestrator |
2026-03-17 00:41:39.426282 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:41:39.426293 | orchestrator | Tuesday 17 March 2026 00:41:39 +0000 (0:00:01.560) 0:00:17.109 *********
2026-03-17 00:41:39.426304 | orchestrator | ===============================================================================
2026-03-17 00:41:39.426315 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.66s
2026-03-17 00:41:39.426326 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.56s
2026-03-17 00:41:39.426337 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.56s
2026-03-17 00:41:39.426347 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.20s
2026-03-17 00:41:39.426358 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 0.84s
2026-03-17 00:41:39.536536 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-17 00:41:39.536630 | orchestrator | + osism apply network
2026-03-17 00:41:50.800000 | orchestrator | 2026-03-17 00:41:50 | INFO  | Prepare task for execution of network.
2026-03-17 00:41:50.872879 | orchestrator | 2026-03-17 00:41:50 | INFO  | Task 658705ee-ca6a-4b25-a7a3-29788657c7ad (network) was prepared for execution.
2026-03-17 00:41:50.872974 | orchestrator | 2026-03-17 00:41:50 | INFO  | It takes a moment until task 658705ee-ca6a-4b25-a7a3-29788657c7ad (network) has been started and output is visible here.
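The PLAY RECAP blocks above are the quickest way to confirm a run succeeded (every host must show failed=0 and unreachable=0). A minimal sketch of checking recap lines mechanically; the regex assumes the standard Ansible recap line format seen in this log:

```python
import re

# Matches lines like:
# testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(lines):
    """Return {host: counters} for every line that looks like a PLAY RECAP entry."""
    results = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            fields = m.groupdict()
            host = fields.pop("host")
            results[host] = {k: int(v) for k, v in fields.items()}
    return results

recap = parse_recap([
    "testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
    "testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
])
# The run is healthy only if no host failed or was unreachable.
assert all(c["failed"] == 0 and c["unreachable"] == 0 for c in recap.values())
```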
2026-03-17 00:42:17.731852 | orchestrator | 2026-03-17 00:42:17.731959 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-03-17 00:42:17.731976 | orchestrator | 2026-03-17 00:42:17.731988 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-03-17 00:42:17.731999 | orchestrator | Tuesday 17 March 2026 00:41:53 +0000 (0:00:00.287) 0:00:00.287 ********* 2026-03-17 00:42:17.732011 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:17.732023 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:42:17.732034 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:42:17.732045 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:42:17.732056 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:42:17.732067 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:42:17.732078 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:42:17.732088 | orchestrator | 2026-03-17 00:42:17.732099 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-03-17 00:42:17.732110 | orchestrator | Tuesday 17 March 2026 00:41:54 +0000 (0:00:00.550) 0:00:00.837 ********* 2026-03-17 00:42:17.732123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:42:17.732137 | orchestrator | 2026-03-17 00:42:17.732148 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-03-17 00:42:17.732218 | orchestrator | Tuesday 17 March 2026 00:41:55 +0000 (0:00:01.035) 0:00:01.872 ********* 2026-03-17 00:42:17.732256 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:17.732267 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:42:17.732278 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:42:17.732289 | 
orchestrator | ok: [testbed-node-2] 2026-03-17 00:42:17.732300 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:42:17.732310 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:42:17.732321 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:42:17.732331 | orchestrator | 2026-03-17 00:42:17.732342 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-03-17 00:42:17.732353 | orchestrator | Tuesday 17 March 2026 00:41:57 +0000 (0:00:02.413) 0:00:04.285 ********* 2026-03-17 00:42:17.732364 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:17.732375 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:42:17.732386 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:42:17.732398 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:42:17.732410 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:42:17.732422 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:42:17.732434 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:42:17.732447 | orchestrator | 2026-03-17 00:42:17.732459 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-03-17 00:42:17.732471 | orchestrator | Tuesday 17 March 2026 00:41:59 +0000 (0:00:01.660) 0:00:05.946 ********* 2026-03-17 00:42:17.732483 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-03-17 00:42:17.732497 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-03-17 00:42:17.732509 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-03-17 00:42:17.732522 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-03-17 00:42:17.732533 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-03-17 00:42:17.732545 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-03-17 00:42:17.732557 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-03-17 00:42:17.732569 | orchestrator | 2026-03-17 00:42:17.732581 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2026-03-17 00:42:17.732593 | orchestrator | Tuesday 17 March 2026 00:42:00 +0000 (0:00:01.130) 0:00:07.077 ********* 2026-03-17 00:42:17.732605 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 00:42:17.732618 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 00:42:17.732630 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:42:17.732643 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-17 00:42:17.732654 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-17 00:42:17.732667 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 00:42:17.732679 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 00:42:17.732691 | orchestrator | 2026-03-17 00:42:17.732703 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-03-17 00:42:17.732715 | orchestrator | Tuesday 17 March 2026 00:42:04 +0000 (0:00:03.345) 0:00:10.422 ********* 2026-03-17 00:42:17.732728 | orchestrator | changed: [testbed-manager] 2026-03-17 00:42:17.732740 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:42:17.732752 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:42:17.732764 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:42:17.732774 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:42:17.732785 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:42:17.732796 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:42:17.732806 | orchestrator | 2026-03-17 00:42:17.732817 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-03-17 00:42:17.732828 | orchestrator | Tuesday 17 March 2026 00:42:05 +0000 (0:00:01.630) 0:00:12.053 ********* 2026-03-17 00:42:17.732838 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:42:17.732849 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 00:42:17.732860 | orchestrator | ok: [testbed-node-2 
-> localhost] 2026-03-17 00:42:17.732870 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-17 00:42:17.732881 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 00:42:17.732892 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 00:42:17.732910 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 00:42:17.732921 | orchestrator | 2026-03-17 00:42:17.732932 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-17 00:42:17.732943 | orchestrator | Tuesday 17 March 2026 00:42:07 +0000 (0:00:01.951) 0:00:14.004 ********* 2026-03-17 00:42:17.732954 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:17.732964 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:42:17.732975 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:42:17.732986 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:42:17.732997 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:42:17.733008 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:42:17.733019 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:42:17.733029 | orchestrator | 2026-03-17 00:42:17.733041 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-17 00:42:17.733088 | orchestrator | Tuesday 17 March 2026 00:42:08 +0000 (0:00:00.979) 0:00:14.983 ********* 2026-03-17 00:42:17.733101 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:42:17.733112 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:42:17.733123 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:42:17.733133 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:42:17.733144 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:42:17.733155 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:42:17.733185 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:42:17.733196 | orchestrator | 2026-03-17 00:42:17.733207 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-03-17 00:42:17.733218 | orchestrator | Tuesday 17 March 2026 00:42:09 +0000 (0:00:00.776) 0:00:15.760 ********* 2026-03-17 00:42:17.733229 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:17.733240 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:42:17.733251 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:42:17.733261 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:42:17.733272 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:42:17.733283 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:42:17.733294 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:42:17.733304 | orchestrator | 2026-03-17 00:42:17.733315 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-17 00:42:17.733326 | orchestrator | Tuesday 17 March 2026 00:42:11 +0000 (0:00:02.115) 0:00:17.875 ********* 2026-03-17 00:42:17.733337 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:42:17.733348 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:42:17.733359 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:42:17.733370 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:42:17.733380 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:42:17.733391 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:42:17.733403 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-03-17 00:42:17.733415 | orchestrator | 2026-03-17 00:42:17.733426 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-17 00:42:17.733437 | orchestrator | Tuesday 17 March 2026 00:42:12 +0000 (0:00:00.929) 0:00:18.805 ********* 2026-03-17 00:42:17.733448 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:17.733459 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:42:17.733469 | orchestrator | changed: [testbed-node-2] 2026-03-17 
00:42:17.733480 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:42:17.733491 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:42:17.733502 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:42:17.733512 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:42:17.733523 | orchestrator | 2026-03-17 00:42:17.733534 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-17 00:42:17.733545 | orchestrator | Tuesday 17 March 2026 00:42:13 +0000 (0:00:01.436) 0:00:20.241 ********* 2026-03-17 00:42:17.733562 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:42:17.733583 | orchestrator | 2026-03-17 00:42:17.733594 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-17 00:42:17.733605 | orchestrator | Tuesday 17 March 2026 00:42:15 +0000 (0:00:01.205) 0:00:21.447 ********* 2026-03-17 00:42:17.733616 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:17.733627 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:42:17.733638 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:42:17.733649 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:42:17.733659 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:42:17.733670 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:42:17.733681 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:42:17.733692 | orchestrator | 2026-03-17 00:42:17.733703 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-17 00:42:17.733713 | orchestrator | Tuesday 17 March 2026 00:42:16 +0000 (0:00:01.032) 0:00:22.480 ********* 2026-03-17 00:42:17.733724 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:17.733735 | orchestrator | ok: [testbed-node-0] 2026-03-17 
00:42:17.733746 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:42:17.733757 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:42:17.733767 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:42:17.733778 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:42:17.733789 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:42:17.733799 | orchestrator | 2026-03-17 00:42:17.733810 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-17 00:42:17.733821 | orchestrator | Tuesday 17 March 2026 00:42:16 +0000 (0:00:00.657) 0:00:23.138 ********* 2026-03-17 00:42:17.733832 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:42:17.733843 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:42:17.733854 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:42:17.733864 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:42:17.733875 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:42:17.733886 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:42:17.733897 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:42:17.733908 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:42:17.733918 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-17 00:42:17.733929 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:42:17.733940 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:42:17.733951 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:42:17.733962 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:42:17.733972 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-17 00:42:17.733983 | orchestrator | 2026-03-17 00:42:17.734001 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-17 00:42:32.638438 | orchestrator | Tuesday 17 March 2026 00:42:17 +0000 (0:00:00.976) 0:00:24.114 ********* 2026-03-17 00:42:32.638542 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:42:32.638559 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:42:32.638570 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:42:32.638581 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:42:32.638592 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:42:32.638603 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:42:32.638614 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:42:32.638625 | orchestrator | 2026-03-17 00:42:32.638638 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-17 00:42:32.638672 | orchestrator | Tuesday 17 March 2026 00:42:18 +0000 (0:00:00.636) 0:00:24.750 ********* 2026-03-17 00:42:32.638685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-5, testbed-node-2, testbed-node-3, testbed-node-4 2026-03-17 00:42:32.638699 | orchestrator | 2026-03-17 00:42:32.638711 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-17 00:42:32.638722 | orchestrator | Tuesday 17 March 2026 00:42:22 +0000 (0:00:03.899) 0:00:28.649 ********* 2026-03-17 00:42:32.638734 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-17 00:42:32.638748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.638759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.638782 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-03-17 00:42:32.638794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.638805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.638816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.638827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 
1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.638838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-03-17 00:42:32.638855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-17 00:42:32.638867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-17 00:42:32.638896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-17 00:42:32.638929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-17 00:42:32.638941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 
'addresses': ['192.168.128.14/20']}}) 2026-03-17 00:42:32.638952 | orchestrator | 2026-03-17 00:42:32.638965 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-17 00:42:32.638978 | orchestrator | Tuesday 17 March 2026 00:42:27 +0000 (0:00:05.159) 0:00:33.808 ********* 2026-03-17 00:42:32.638991 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-03-17 00:42:32.639003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.639017 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-03-17 00:42:32.639030 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.639047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.639060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.639072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.639085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-03-17 00:42:32.639098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-03-17 00:42:32.639110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-03-17 00:42:32.639123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-03-17 00:42:32.639184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-03-17 00:42:32.639212 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-03-17 00:42:44.165287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-03-17 00:42:44.165417 | orchestrator | 2026-03-17 00:42:44.165435 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-17 00:42:44.165450 | orchestrator | Tuesday 17 March 2026 00:42:32 +0000 (0:00:05.356) 0:00:39.165 ********* 2026-03-17 00:42:44.165463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:42:44.165475 | orchestrator | 2026-03-17 00:42:44.165487 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-17 00:42:44.165498 | orchestrator | Tuesday 17 March 2026 00:42:33 +0000 (0:00:01.035) 0:00:40.200 ********* 2026-03-17 00:42:44.165509 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:44.165521 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:42:44.165532 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:42:44.165543 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:42:44.165554 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:42:44.165564 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:42:44.165575 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:42:44.165586 | orchestrator | 2026-03-17 00:42:44.165597 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-03-17 00:42:44.165608 | orchestrator | Tuesday 17 March 2026 00:42:34 +0000 (0:00:00.950) 0:00:41.151 ********* 2026-03-17 00:42:44.165621 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-17 00:42:44.165633 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-17 00:42:44.165646 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-17 00:42:44.165659 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-17 00:42:44.165672 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-17 00:42:44.165702 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-17 00:42:44.165714 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-17 00:42:44.165727 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-17 00:42:44.165740 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:42:44.165752 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-17 00:42:44.165763 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-17 00:42:44.165774 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-17 00:42:44.165785 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-17 00:42:44.165796 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:42:44.165807 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-17 00:42:44.165842 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2026-03-17 00:42:44.165853 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-17 00:42:44.165864 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-17 00:42:44.165875 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:42:44.165886 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-17 00:42:44.165897 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-17 00:42:44.165907 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-17 00:42:44.165918 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-17 00:42:44.165929 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:42:44.165939 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-17 00:42:44.165950 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-17 00:42:44.165961 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-17 00:42:44.165972 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:42:44.165983 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-17 00:42:44.165993 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:42:44.166004 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-17 00:42:44.166108 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-17 00:42:44.166220 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-17 00:42:44.166241 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-17 00:42:44.166268 | 
orchestrator | skipping: [testbed-node-5] 2026-03-17 00:42:44.166289 | orchestrator | 2026-03-17 00:42:44.166309 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-17 00:42:44.166356 | orchestrator | Tuesday 17 March 2026 00:42:35 +0000 (0:00:00.670) 0:00:41.821 ********* 2026-03-17 00:42:44.166377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:42:44.166393 | orchestrator | 2026-03-17 00:42:44.166404 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-17 00:42:44.166415 | orchestrator | Tuesday 17 March 2026 00:42:36 +0000 (0:00:01.056) 0:00:42.877 ********* 2026-03-17 00:42:44.166425 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:42:44.166443 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:42:44.166460 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:42:44.166477 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:42:44.166497 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:42:44.166516 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:42:44.166534 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:42:44.166550 | orchestrator | 2026-03-17 00:42:44.166561 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-03-17 00:42:44.166572 | orchestrator | Tuesday 17 March 2026 00:42:37 +0000 (0:00:00.630) 0:00:43.508 ********* 2026-03-17 00:42:44.166582 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:42:44.166593 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:42:44.166604 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:42:44.166614 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:42:44.166625 | 
orchestrator | skipping: [testbed-node-3] 2026-03-17 00:42:44.166636 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:42:44.166647 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:42:44.166660 | orchestrator | 2026-03-17 00:42:44.166695 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-17 00:42:44.166714 | orchestrator | Tuesday 17 March 2026 00:42:37 +0000 (0:00:00.542) 0:00:44.050 ********* 2026-03-17 00:42:44.166732 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:42:44.166749 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:42:44.166766 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:42:44.166784 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:42:44.166801 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:42:44.166817 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:42:44.166836 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:42:44.166855 | orchestrator | 2026-03-17 00:42:44.166873 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-17 00:42:44.166890 | orchestrator | Tuesday 17 March 2026 00:42:38 +0000 (0:00:00.629) 0:00:44.679 ********* 2026-03-17 00:42:44.166907 | orchestrator | ok: [testbed-manager] 2026-03-17 00:42:44.166935 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:42:44.166953 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:42:44.166970 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:42:44.166987 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:42:44.167004 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:42:44.167023 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:42:44.167041 | orchestrator | 2026-03-17 00:42:44.167059 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-17 00:42:44.167077 | orchestrator | Tuesday 17 March 2026 00:42:39 +0000 (0:00:01.541) 0:00:46.221 ********* 
2026-03-17 00:42:44.167096 | orchestrator | ok: [testbed-manager]
2026-03-17 00:42:44.167115 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:42:44.167168 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:42:44.167187 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:42:44.167206 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:42:44.167217 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:42:44.167228 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:42:44.167238 | orchestrator |
2026-03-17 00:42:44.167249 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-03-17 00:42:44.167260 | orchestrator | Tuesday 17 March 2026 00:42:40 +0000 (0:00:01.002) 0:00:47.224 *********
2026-03-17 00:42:44.167271 | orchestrator | ok: [testbed-manager]
2026-03-17 00:42:44.167281 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:42:44.167292 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:42:44.167302 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:42:44.167313 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:42:44.167323 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:42:44.167334 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:42:44.167345 | orchestrator |
2026-03-17 00:42:44.167356 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-17 00:42:44.167375 | orchestrator | Tuesday 17 March 2026 00:42:42 +0000 (0:00:02.118) 0:00:49.342 *********
2026-03-17 00:42:44.167394 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:42:44.167412 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:42:44.167430 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:42:44.167447 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:42:44.167466 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:42:44.167484 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:42:44.167505 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:42:44.167524 | orchestrator |
2026-03-17 00:42:44.167542 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-17 00:42:44.167561 | orchestrator | Tuesday 17 March 2026 00:42:43 +0000 (0:00:00.608) 0:00:49.951 *********
2026-03-17 00:42:44.167580 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:42:44.167597 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:42:44.167616 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:42:44.167659 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:42:44.167678 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:42:44.167696 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:42:44.167732 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:42:44.167751 | orchestrator |
2026-03-17 00:42:44.167769 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:42:44.167787 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-17 00:42:44.167807 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:42:44.167845 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:42:44.342486 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:42:44.342577 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:42:44.342591 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:42:44.342603 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-17 00:42:44.342615 | orchestrator |
2026-03-17 00:42:44.342627 | orchestrator |
2026-03-17 00:42:44.342638 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:42:44.342650 | orchestrator | Tuesday 17 March 2026 00:42:44 +0000 (0:00:00.595) 0:00:50.546 *********
2026-03-17 00:42:44.342661 | orchestrator | ===============================================================================
2026-03-17 00:42:44.342672 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.36s
2026-03-17 00:42:44.342683 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.16s
2026-03-17 00:42:44.342694 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.90s
2026-03-17 00:42:44.342705 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.35s
2026-03-17 00:42:44.342716 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.41s
2026-03-17 00:42:44.342727 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.12s
2026-03-17 00:42:44.342738 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s
2026-03-17 00:42:44.342749 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.95s
2026-03-17 00:42:44.342760 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.66s
2026-03-17 00:42:44.342770 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s
2026-03-17 00:42:44.342800 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.54s
2026-03-17 00:42:44.342812 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.44s
2026-03-17 00:42:44.342841 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.21s
2026-03-17 00:42:44.342852 | orchestrator | osism.commons.network : Create required directories --------------------- 1.13s
2026-03-17 00:42:44.342863 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.06s
2026-03-17 00:42:44.342885 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.04s
2026-03-17 00:42:44.342896 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.04s
2026-03-17 00:42:44.342907 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s
2026-03-17 00:42:44.342917 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 1.00s
2026-03-17 00:42:44.342928 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.98s
2026-03-17 00:42:44.475893 | orchestrator | + osism apply wireguard
2026-03-17 00:42:55.857442 | orchestrator | 2026-03-17 00:42:55 | INFO  | Prepare task for execution of wireguard.
2026-03-17 00:42:55.923635 | orchestrator | 2026-03-17 00:42:55 | INFO  | Task d9665ea2-6871-44cb-b215-84f6e31711d2 (wireguard) was prepared for execution.
2026-03-17 00:42:55.923716 | orchestrator | 2026-03-17 00:42:55 | INFO  | It takes a moment until task d9665ea2-6871-44cb-b215-84f6e31711d2 (wireguard) has been started and output is visible here.
2026-03-17 00:43:12.858368 | orchestrator |
2026-03-17 00:43:12.858464 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-17 00:43:12.858481 | orchestrator |
2026-03-17 00:43:12.858495 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-17 00:43:12.858507 | orchestrator | Tuesday 17 March 2026 00:42:59 +0000 (0:00:00.265) 0:00:00.265 *********
2026-03-17 00:43:12.858520 | orchestrator | ok: [testbed-manager]
2026-03-17 00:43:12.858533 | orchestrator |
2026-03-17 00:43:12.858546 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-17 00:43:12.858559 | orchestrator | Tuesday 17 March 2026 00:43:00 +0000 (0:00:01.554) 0:00:01.819 *********
2026-03-17 00:43:12.858571 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:12.858584 | orchestrator |
2026-03-17 00:43:12.858597 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-17 00:43:12.858610 | orchestrator | Tuesday 17 March 2026 00:43:06 +0000 (0:00:05.214) 0:00:07.034 *********
2026-03-17 00:43:12.858622 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:12.858635 | orchestrator |
2026-03-17 00:43:12.858647 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-17 00:43:12.858660 | orchestrator | Tuesday 17 March 2026 00:43:06 +0000 (0:00:00.488) 0:00:07.522 *********
2026-03-17 00:43:12.858672 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:12.858685 | orchestrator |
2026-03-17 00:43:12.858697 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-17 00:43:12.858710 | orchestrator | Tuesday 17 March 2026 00:43:07 +0000 (0:00:00.382) 0:00:07.905 *********
2026-03-17 00:43:12.858722 | orchestrator | ok: [testbed-manager]
2026-03-17 00:43:12.858735 | orchestrator |
2026-03-17 00:43:12.858747 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-17 00:43:12.858760 | orchestrator | Tuesday 17 March 2026 00:43:07 +0000 (0:00:00.478) 0:00:08.383 *********
2026-03-17 00:43:12.858772 | orchestrator | ok: [testbed-manager]
2026-03-17 00:43:12.858785 | orchestrator |
2026-03-17 00:43:12.858797 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-17 00:43:12.858809 | orchestrator | Tuesday 17 March 2026 00:43:07 +0000 (0:00:00.362) 0:00:08.745 *********
2026-03-17 00:43:12.858821 | orchestrator | ok: [testbed-manager]
2026-03-17 00:43:12.858833 | orchestrator |
2026-03-17 00:43:12.858846 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-17 00:43:12.858859 | orchestrator | Tuesday 17 March 2026 00:43:08 +0000 (0:00:00.374) 0:00:09.120 *********
2026-03-17 00:43:12.858871 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:12.858884 | orchestrator |
2026-03-17 00:43:12.858896 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-17 00:43:12.858908 | orchestrator | Tuesday 17 March 2026 00:43:09 +0000 (0:00:01.011) 0:00:10.131 *********
2026-03-17 00:43:12.858922 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-17 00:43:12.858937 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:12.858949 | orchestrator |
2026-03-17 00:43:12.858960 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-17 00:43:12.858972 | orchestrator | Tuesday 17 March 2026 00:43:10 +0000 (0:00:00.824) 0:00:10.956 *********
2026-03-17 00:43:12.858987 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:12.859002 | orchestrator |
2026-03-17 00:43:12.859016 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-17 00:43:12.859059 | orchestrator | Tuesday 17 March 2026 00:43:11 +0000 (0:00:01.713) 0:00:12.669 *********
2026-03-17 00:43:12.859074 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:12.859090 | orchestrator |
2026-03-17 00:43:12.859123 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:43:12.859135 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:43:12.859148 | orchestrator |
2026-03-17 00:43:12.859163 | orchestrator |
2026-03-17 00:43:12.859178 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:43:12.859190 | orchestrator | Tuesday 17 March 2026 00:43:12 +0000 (0:00:00.874) 0:00:13.543 *********
2026-03-17 00:43:12.859202 | orchestrator | ===============================================================================
2026-03-17 00:43:12.859217 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.21s
2026-03-17 00:43:12.859243 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.71s
2026-03-17 00:43:12.859258 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s
2026-03-17 00:43:12.859274 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.01s
2026-03-17 00:43:12.859287 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.87s
2026-03-17 00:43:12.859317 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.82s
2026-03-17 00:43:12.859330 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s
2026-03-17 00:43:12.859342 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.48s
2026-03-17 00:43:12.859355 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s
2026-03-17 00:43:12.859367 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.37s
2026-03-17 00:43:12.859379 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.36s
2026-03-17 00:43:12.977375 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-17 00:43:13.013705 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-17 00:43:13.013783 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-17 00:43:13.090368 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 182 0 --:--:-- --:--:-- --:--:-- 184
2026-03-17 00:43:13.103804 | orchestrator | + osism apply --environment custom workarounds
2026-03-17 00:43:14.234342 | orchestrator | 2026-03-17 00:43:14 | INFO  | Trying to run play workarounds in environment custom
2026-03-17 00:43:24.373636 | orchestrator | 2026-03-17 00:43:24 | INFO  | Prepare task for execution of workarounds.
2026-03-17 00:43:24.445659 | orchestrator | 2026-03-17 00:43:24 | INFO  | Task 697e2990-01bd-475d-b948-aa0b9ff82369 (workarounds) was prepared for execution.
2026-03-17 00:43:24.445768 | orchestrator | 2026-03-17 00:43:24 | INFO  | It takes a moment until task 697e2990-01bd-475d-b948-aa0b9ff82369 (workarounds) has been started and output is visible here.
2026-03-17 00:43:47.517872 | orchestrator |
2026-03-17 00:43:47.517982 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:43:47.518000 | orchestrator |
2026-03-17 00:43:47.518108 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-17 00:43:47.518124 | orchestrator | Tuesday 17 March 2026 00:43:27 +0000 (0:00:00.161) 0:00:00.161 *********
2026-03-17 00:43:47.518136 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-17 00:43:47.518148 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-17 00:43:47.518159 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-17 00:43:47.518171 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-17 00:43:47.518208 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-17 00:43:47.518220 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-17 00:43:47.518231 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-17 00:43:47.518242 | orchestrator |
2026-03-17 00:43:47.518252 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-17 00:43:47.518263 | orchestrator |
2026-03-17 00:43:47.518274 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-17 00:43:47.518285 | orchestrator | Tuesday 17 March 2026 00:43:27 +0000 (0:00:00.520) 0:00:00.682 *********
2026-03-17 00:43:47.518296 | orchestrator | ok: [testbed-manager]
2026-03-17 00:43:47.518308 | orchestrator |
2026-03-17 00:43:47.518319 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-17 00:43:47.518330 | orchestrator |
2026-03-17 00:43:47.518340 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-17 00:43:47.518351 | orchestrator | Tuesday 17 March 2026 00:43:30 +0000 (0:00:02.370) 0:00:03.052 *********
2026-03-17 00:43:47.518363 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:43:47.518373 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:43:47.518384 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:43:47.518398 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:43:47.518410 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:43:47.518422 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:43:47.518434 | orchestrator |
2026-03-17 00:43:47.518448 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-17 00:43:47.518460 | orchestrator |
2026-03-17 00:43:47.518472 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-17 00:43:47.518484 | orchestrator | Tuesday 17 March 2026 00:43:32 +0000 (0:00:02.252) 0:00:05.305 *********
2026-03-17 00:43:47.518497 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:43:47.518510 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:43:47.518523 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:43:47.518535 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:43:47.518548 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:43:47.518574 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-17 00:43:47.518588 | orchestrator |
2026-03-17 00:43:47.518601 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-17 00:43:47.518613 | orchestrator | Tuesday 17 March 2026 00:43:33 +0000 (0:00:01.298) 0:00:06.603 *********
2026-03-17 00:43:47.518626 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:43:47.518639 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:43:47.518651 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:43:47.518663 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:43:47.518673 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:43:47.518684 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:43:47.518695 | orchestrator |
2026-03-17 00:43:47.518706 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-17 00:43:47.518716 | orchestrator | Tuesday 17 March 2026 00:43:37 +0000 (0:00:03.786) 0:00:10.390 *********
2026-03-17 00:43:47.518727 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:43:47.518738 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:43:47.518749 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:43:47.518760 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:43:47.518770 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:43:47.518781 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:43:47.518800 | orchestrator |
2026-03-17 00:43:47.518811 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-17 00:43:47.518822 | orchestrator |
2026-03-17 00:43:47.518833 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-17 00:43:47.518844 | orchestrator | Tuesday 17 March 2026 00:43:37 +0000 (0:00:00.473) 0:00:10.863 *********
2026-03-17 00:43:47.518855 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:47.518865 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:43:47.518876 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:43:47.518887 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:43:47.518898 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:43:47.518912 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:43:47.518930 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:43:47.518949 | orchestrator |
2026-03-17 00:43:47.518968 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-17 00:43:47.518987 | orchestrator | Tuesday 17 March 2026 00:43:39 +0000 (0:00:01.629) 0:00:12.493 *********
2026-03-17 00:43:47.519007 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:47.519028 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:43:47.519080 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:43:47.519102 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:43:47.519119 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:43:47.519136 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:43:47.519178 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:43:47.519197 | orchestrator |
2026-03-17 00:43:47.519216 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-17 00:43:47.519233 | orchestrator | Tuesday 17 March 2026 00:43:40 +0000 (0:00:01.393) 0:00:13.886 *********
2026-03-17 00:43:47.519253 | orchestrator | ok: [testbed-manager]
2026-03-17 00:43:47.519271 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:43:47.519289 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:43:47.519303 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:43:47.519313 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:43:47.519324 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:43:47.519334 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:43:47.519345 | orchestrator |
2026-03-17 00:43:47.519356 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-17 00:43:47.519367 | orchestrator | Tuesday 17 March 2026 00:43:42 +0000 (0:00:01.515) 0:00:15.401 *********
2026-03-17 00:43:47.519377 | orchestrator | changed: [testbed-manager]
2026-03-17 00:43:47.519388 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:43:47.519399 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:43:47.519409 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:43:47.519420 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:43:47.519430 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:43:47.519440 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:43:47.519451 | orchestrator |
2026-03-17 00:43:47.519461 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-17 00:43:47.519472 | orchestrator | Tuesday 17 March 2026 00:43:44 +0000 (0:00:01.525) 0:00:16.927 *********
2026-03-17 00:43:47.519483 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:43:47.519493 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:43:47.519504 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:43:47.519514 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:43:47.519525 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:43:47.519535 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:43:47.519545 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:43:47.519556 | orchestrator |
2026-03-17 00:43:47.519567 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-17 00:43:47.519577 | orchestrator |
2026-03-17 00:43:47.519590 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-17 00:43:47.519609 | orchestrator | Tuesday 17 March 2026 00:43:44 +0000 (0:00:00.752) 0:00:17.680 *********
2026-03-17 00:43:47.519626 | orchestrator | ok: [testbed-manager]
2026-03-17 00:43:47.519658 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:43:47.519674 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:43:47.519690 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:43:47.519708 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:43:47.519727 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:43:47.519745 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:43:47.519764 | orchestrator |
2026-03-17 00:43:47.519782 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:43:47.519803 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:43:47.519824 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:43:47.519842 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:43:47.519876 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:43:47.519901 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:43:47.519919 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:43:47.519937 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:43:47.519954 | orchestrator |
2026-03-17 00:43:47.519974 | orchestrator |
2026-03-17 00:43:47.519991 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:43:47.520009 | orchestrator | Tuesday 17 March 2026 00:43:47 +0000 (0:00:02.709) 0:00:20.389 *********
2026-03-17 00:43:47.520021 | orchestrator | ===============================================================================
2026-03-17 00:43:47.520032 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.79s
2026-03-17 00:43:47.520042 | orchestrator | Install python3-docker -------------------------------------------------- 2.71s
2026-03-17 00:43:47.520053 | orchestrator | Apply netplan configuration --------------------------------------------- 2.37s
2026-03-17 00:43:47.520103 | orchestrator | Apply netplan configuration --------------------------------------------- 2.25s
2026-03-17 00:43:47.520114 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.63s
2026-03-17 00:43:47.520125 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.53s
2026-03-17 00:43:47.520136 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s
2026-03-17 00:43:47.520147 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.39s
2026-03-17 00:43:47.520158 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.30s
2026-03-17 00:43:47.520168 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.75s
2026-03-17 00:43:47.520179 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.52s
2026-03-17 00:43:47.520202 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.47s
2026-03-17 00:43:47.845657 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-17 00:43:58.992497 | orchestrator | 2026-03-17 00:43:58 | INFO  | Prepare task for execution of reboot.
2026-03-17 00:43:59.071159 | orchestrator | 2026-03-17 00:43:59 | INFO  | Task a276b1e5-6032-48bb-9c0b-6596650ae511 (reboot) was prepared for execution.
2026-03-17 00:43:59.071271 | orchestrator | 2026-03-17 00:43:59 | INFO  | It takes a moment until task a276b1e5-6032-48bb-9c0b-6596650ae511 (reboot) has been started and output is visible here.
2026-03-17 00:44:09.832486 | orchestrator |
2026-03-17 00:44:09.832598 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:44:09.832631 | orchestrator |
2026-03-17 00:44:09.832655 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:44:09.832667 | orchestrator | Tuesday 17 March 2026 00:44:02 +0000 (0:00:00.225) 0:00:00.225 *********
2026-03-17 00:44:09.832679 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:44:09.832691 | orchestrator |
2026-03-17 00:44:09.832702 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:44:09.832713 | orchestrator | Tuesday 17 March 2026 00:44:02 +0000 (0:00:00.113) 0:00:00.339 *********
2026-03-17 00:44:09.832724 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:44:09.832735 | orchestrator |
2026-03-17 00:44:09.832746 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:44:09.832757 | orchestrator | Tuesday 17 March 2026 00:44:03 +0000 (0:00:01.236) 0:00:01.576 *********
2026-03-17 00:44:09.832768 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:44:09.832779 | orchestrator |
2026-03-17 00:44:09.832792 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:44:09.832810 | orchestrator |
2026-03-17 00:44:09.832829 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:44:09.832847 | orchestrator | Tuesday 17 March 2026 00:44:03 +0000 (0:00:00.089) 0:00:01.665 *********
2026-03-17 00:44:09.832875 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:44:09.832895 | orchestrator |
2026-03-17 00:44:09.832913 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:44:09.832930 | orchestrator | Tuesday 17 March 2026 00:44:03 +0000 (0:00:00.079) 0:00:01.745 *********
2026-03-17 00:44:09.832949 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:44:09.832967 | orchestrator |
2026-03-17 00:44:09.832987 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:44:09.833006 | orchestrator | Tuesday 17 March 2026 00:44:04 +0000 (0:00:01.030) 0:00:02.775 *********
2026-03-17 00:44:09.833026 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:44:09.833075 | orchestrator |
2026-03-17 00:44:09.833093 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:44:09.833113 | orchestrator |
2026-03-17 00:44:09.833133 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:44:09.833152 | orchestrator | Tuesday 17 March 2026 00:44:04 +0000 (0:00:00.105) 0:00:02.881 *********
2026-03-17 00:44:09.833171 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:44:09.833192 | orchestrator |
2026-03-17 00:44:09.833212 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:44:09.833253 | orchestrator | Tuesday 17 March 2026 00:44:04 +0000 (0:00:00.086) 0:00:02.968 *********
2026-03-17 00:44:09.833274 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:44:09.833293 | orchestrator |
2026-03-17 00:44:09.833313 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:44:09.833333 | orchestrator | Tuesday 17 March 2026 00:44:05 +0000 (0:00:00.990) 0:00:03.959 *********
2026-03-17 00:44:09.833352 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:44:09.833371 | orchestrator |
2026-03-17 00:44:09.833390 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:44:09.833410 | orchestrator |
2026-03-17 00:44:09.833428 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:44:09.833448 | orchestrator | Tuesday 17 March 2026 00:44:05 +0000 (0:00:00.104) 0:00:04.063 *********
2026-03-17 00:44:09.833467 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:44:09.833486 | orchestrator |
2026-03-17 00:44:09.833507 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:44:09.833526 | orchestrator | Tuesday 17 March 2026 00:44:06 +0000 (0:00:00.096) 0:00:04.160 *********
2026-03-17 00:44:09.833545 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:44:09.833594 | orchestrator |
2026-03-17 00:44:09.833614 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:44:09.833633 | orchestrator | Tuesday 17 March 2026 00:44:07 +0000 (0:00:01.030) 0:00:05.190 *********
2026-03-17 00:44:09.833651 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:44:09.833670 | orchestrator |
2026-03-17 00:44:09.833689 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:44:09.833707 | orchestrator |
2026-03-17 00:44:09.833724 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:44:09.833741 | orchestrator | Tuesday 17 March 2026 00:44:07 +0000 (0:00:00.100) 0:00:05.291 *********
2026-03-17 00:44:09.833760 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:44:09.833778 | orchestrator |
2026-03-17 00:44:09.833797 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:44:09.833815 | orchestrator | Tuesday 17 March 2026 00:44:07 +0000 (0:00:00.097) 0:00:05.388 *********
2026-03-17 00:44:09.833833 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:44:09.833850 | orchestrator |
2026-03-17 00:44:09.833867 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:44:09.833886 | orchestrator | Tuesday 17 March 2026 00:44:08 +0000 (0:00:01.123) 0:00:06.511 *********
2026-03-17 00:44:09.833905 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:44:09.833923 | orchestrator |
2026-03-17 00:44:09.833940 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-17 00:44:09.833957 | orchestrator |
2026-03-17 00:44:09.833975 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-17 00:44:09.833993 | orchestrator | Tuesday 17 March 2026 00:44:08 +0000 (0:00:00.095) 0:00:06.607 *********
2026-03-17 00:44:09.834012 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:09.834136 | orchestrator |
2026-03-17 00:44:09.834156 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-17 00:44:09.834173 | orchestrator | Tuesday 17 March 2026 00:44:08 +0000 (0:00:00.083) 0:00:06.691 *********
2026-03-17 00:44:09.834190 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:44:09.834209 | orchestrator |
2026-03-17 00:44:09.834228 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-17 00:44:09.834245 | orchestrator | Tuesday 17 March 2026 00:44:09 +0000 (0:00:01.019) 0:00:07.710 *********
2026-03-17 00:44:09.834296 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:44:09.834316 | orchestrator |
2026-03-17 00:44:09.834336 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:44:09.834358 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:44:09.834379 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 00:44:09.834399 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2
rescued=0 ignored=0 2026-03-17 00:44:09.834419 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:09.834439 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:09.834459 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:44:09.834478 | orchestrator | 2026-03-17 00:44:09.834498 | orchestrator | 2026-03-17 00:44:09.834518 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:44:09.834538 | orchestrator | Tuesday 17 March 2026 00:44:09 +0000 (0:00:00.032) 0:00:07.742 ********* 2026-03-17 00:44:09.834557 | orchestrator | =============================================================================== 2026-03-17 00:44:09.834598 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.43s 2026-03-17 00:44:09.834618 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.56s 2026-03-17 00:44:09.834637 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.53s 2026-03-17 00:44:09.963680 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-17 00:44:21.114360 | orchestrator | 2026-03-17 00:44:21 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-17 00:44:21.180481 | orchestrator | 2026-03-17 00:44:21 | INFO  | Task 08f40c8e-a750-46fd-b05e-9add074fa1ef (wait-for-connection) was prepared for execution. 2026-03-17 00:44:21.180583 | orchestrator | 2026-03-17 00:44:21 | INFO  | It takes a moment until task 08f40c8e-a750-46fd-b05e-9add074fa1ef (wait-for-connection) has been started and output is visible here. 
2026-03-17 00:44:35.660047 | orchestrator | 2026-03-17 00:44:35.660143 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-17 00:44:35.660157 | orchestrator | 2026-03-17 00:44:35.660168 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-17 00:44:35.660178 | orchestrator | Tuesday 17 March 2026 00:44:23 +0000 (0:00:00.231) 0:00:00.231 ********* 2026-03-17 00:44:35.660187 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:44:35.660198 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:44:35.660206 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:44:35.660215 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:44:35.660224 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:44:35.660233 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:44:35.660241 | orchestrator | 2026-03-17 00:44:35.660250 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:44:35.660260 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:44:35.660270 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:44:35.660279 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:44:35.660287 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:44:35.660296 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:44:35.660305 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:44:35.660313 | orchestrator | 2026-03-17 00:44:35.660322 | orchestrator | 2026-03-17 00:44:35.660331 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-17 00:44:35.660340 | orchestrator | Tuesday 17 March 2026 00:44:35 +0000 (0:00:11.436) 0:00:11.668 ********* 2026-03-17 00:44:35.660349 | orchestrator | =============================================================================== 2026-03-17 00:44:35.660357 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.44s 2026-03-17 00:44:35.846100 | orchestrator | + osism apply hddtemp 2026-03-17 00:44:47.171753 | orchestrator | 2026-03-17 00:44:47 | INFO  | Prepare task for execution of hddtemp. 2026-03-17 00:44:47.239776 | orchestrator | 2026-03-17 00:44:47 | INFO  | Task f164e803-0874-4c9b-9a3f-3b61f0c9b8c7 (hddtemp) was prepared for execution. 2026-03-17 00:44:47.239859 | orchestrator | 2026-03-17 00:44:47 | INFO  | It takes a moment until task f164e803-0874-4c9b-9a3f-3b61f0c9b8c7 (hddtemp) has been started and output is visible here. 2026-03-17 00:45:13.985135 | orchestrator | 2026-03-17 00:45:13.985233 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-17 00:45:13.985266 | orchestrator | 2026-03-17 00:45:13.985276 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-17 00:45:13.985285 | orchestrator | Tuesday 17 March 2026 00:44:50 +0000 (0:00:00.297) 0:00:00.297 ********* 2026-03-17 00:45:13.985293 | orchestrator | ok: [testbed-manager] 2026-03-17 00:45:13.985302 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:45:13.985311 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:45:13.985319 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:45:13.985327 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:45:13.985336 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:45:13.985343 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:45:13.985351 | orchestrator | 2026-03-17 00:45:13.985360 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-17 00:45:13.985368 | orchestrator | Tuesday 17 March 2026 00:44:51 +0000 (0:00:00.561) 0:00:00.859 ********* 2026-03-17 00:45:13.985378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:45:13.985387 | orchestrator | 2026-03-17 00:45:13.985396 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-17 00:45:13.985404 | orchestrator | Tuesday 17 March 2026 00:44:52 +0000 (0:00:01.022) 0:00:01.881 ********* 2026-03-17 00:45:13.985412 | orchestrator | ok: [testbed-manager] 2026-03-17 00:45:13.985420 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:45:13.985428 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:45:13.985436 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:45:13.985444 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:45:13.985452 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:45:13.985459 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:45:13.985467 | orchestrator | 2026-03-17 00:45:13.985475 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-17 00:45:13.985483 | orchestrator | Tuesday 17 March 2026 00:44:54 +0000 (0:00:02.477) 0:00:04.359 ********* 2026-03-17 00:45:13.985491 | orchestrator | changed: [testbed-manager] 2026-03-17 00:45:13.985500 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:45:13.985508 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:45:13.985516 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:45:13.985523 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:45:13.985531 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:45:13.985539 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:45:13.985547 | 
orchestrator | 2026-03-17 00:45:13.985567 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-17 00:45:13.985575 | orchestrator | Tuesday 17 March 2026 00:44:55 +0000 (0:00:00.905) 0:00:05.265 ********* 2026-03-17 00:45:13.985583 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:45:13.985591 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:45:13.985599 | orchestrator | ok: [testbed-manager] 2026-03-17 00:45:13.985607 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:45:13.985615 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:45:13.985623 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:45:13.985631 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:45:13.985639 | orchestrator | 2026-03-17 00:45:13.985647 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-17 00:45:13.985655 | orchestrator | Tuesday 17 March 2026 00:44:56 +0000 (0:00:01.242) 0:00:06.507 ********* 2026-03-17 00:45:13.985663 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:45:13.985671 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:45:13.985679 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:45:13.985686 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:45:13.985694 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:45:13.985702 | orchestrator | changed: [testbed-manager] 2026-03-17 00:45:13.985710 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:45:13.985718 | orchestrator | 2026-03-17 00:45:13.985726 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-17 00:45:13.985741 | orchestrator | Tuesday 17 March 2026 00:44:57 +0000 (0:00:00.530) 0:00:07.038 ********* 2026-03-17 00:45:13.985749 | orchestrator | changed: [testbed-manager] 2026-03-17 00:45:13.985757 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:45:13.985765 | orchestrator | changed: [testbed-node-1] 
2026-03-17 00:45:13.985773 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:45:13.985781 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:45:13.985789 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:45:13.985796 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:45:13.985810 | orchestrator | 2026-03-17 00:45:13.985823 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-17 00:45:13.985837 | orchestrator | Tuesday 17 March 2026 00:45:10 +0000 (0:00:13.183) 0:00:20.221 ********* 2026-03-17 00:45:13.985849 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:45:13.985869 | orchestrator | 2026-03-17 00:45:13.985882 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-17 00:45:13.985894 | orchestrator | Tuesday 17 March 2026 00:45:11 +0000 (0:00:01.269) 0:00:21.491 ********* 2026-03-17 00:45:13.985907 | orchestrator | changed: [testbed-manager] 2026-03-17 00:45:13.985919 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:45:13.985931 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:45:13.985944 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:45:13.985955 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:45:13.985998 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:45:13.986013 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:45:13.986099 | orchestrator | 2026-03-17 00:45:13.986114 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:45:13.986130 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:45:13.986169 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:45:13.986185 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:45:13.986201 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:45:13.986211 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:45:13.986220 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:45:13.986229 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:45:13.986237 | orchestrator | 2026-03-17 00:45:13.986246 | orchestrator | 2026-03-17 00:45:13.986255 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:45:13.986264 | orchestrator | Tuesday 17 March 2026 00:45:13 +0000 (0:00:02.018) 0:00:23.510 ********* 2026-03-17 00:45:13.986273 | orchestrator | =============================================================================== 2026-03-17 00:45:13.986282 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.18s 2026-03-17 00:45:13.986290 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.48s 2026-03-17 00:45:13.986299 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.02s 2026-03-17 00:45:13.986307 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.27s 2026-03-17 00:45:13.986327 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.24s 2026-03-17 00:45:13.986336 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.02s 2026-03-17 00:45:13.986344 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.91s 2026-03-17 00:45:13.986361 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.56s 2026-03-17 00:45:13.986369 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.53s 2026-03-17 00:45:14.151512 | orchestrator | ++ semver latest 7.1.1 2026-03-17 00:45:14.196768 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:45:14.196849 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-17 00:45:14.196860 | orchestrator | + sudo systemctl restart manager.service 2026-03-17 00:45:27.586305 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-17 00:45:27.586423 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-17 00:45:27.586440 | orchestrator | + local max_attempts=60 2026-03-17 00:45:27.586453 | orchestrator | + local name=ceph-ansible 2026-03-17 00:45:27.586465 | orchestrator | + local attempt_num=1 2026-03-17 00:45:27.586477 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:45:27.617415 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:45:27.617505 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:45:27.617518 | orchestrator | + sleep 5 2026-03-17 00:45:32.622343 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:45:32.672938 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:45:32.673047 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:45:32.673057 | orchestrator | + sleep 5 2026-03-17 00:45:37.675941 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:45:37.714927 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:45:37.715003 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:45:37.715010 | orchestrator | + sleep 5 2026-03-17 00:45:42.719341 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:45:42.756650 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:45:42.756751 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:45:42.756766 | orchestrator | + sleep 5 2026-03-17 00:45:47.759879 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:45:47.794233 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:45:47.794322 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:45:47.794345 | orchestrator | + sleep 5 2026-03-17 00:45:52.799884 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:45:52.837473 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:45:52.837543 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:45:52.837550 | orchestrator | + sleep 5 2026-03-17 00:45:57.841594 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:45:57.877271 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:45:57.877343 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:45:57.877350 | orchestrator | + sleep 5 2026-03-17 00:46:02.881245 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:46:02.915696 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:46:02.915787 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:46:02.915797 | orchestrator | + sleep 5 2026-03-17 00:46:07.921597 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:46:07.955722 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:46:07.955807 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:46:07.955816 | orchestrator | + sleep 5 2026-03-17 00:46:12.960435 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:46:12.997068 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:46:12.997215 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:46:12.997233 | orchestrator | + sleep 5 2026-03-17 00:46:18.001766 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:46:18.042377 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:46:18.042466 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:46:18.042494 | orchestrator | + sleep 5 2026-03-17 00:46:23.048413 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:46:23.084142 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:46:23.084315 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:46:23.084335 | orchestrator | + sleep 5 2026-03-17 00:46:28.088137 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:46:28.124998 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-17 00:46:28.125103 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-17 00:46:28.125120 | orchestrator | + sleep 5 2026-03-17 00:46:33.129122 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-17 00:46:33.167273 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:46:33.167376 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-17 00:46:33.167388 | orchestrator | + local max_attempts=60 2026-03-17 00:46:33.167397 | orchestrator | + local name=kolla-ansible 2026-03-17 00:46:33.167403 | orchestrator | + local attempt_num=1 2026-03-17 00:46:33.168120 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-17 00:46:33.201810 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:46:33.201907 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-17 00:46:33.201922 | orchestrator | + local max_attempts=60 2026-03-17 00:46:33.201934 | orchestrator | + local name=osism-ansible 2026-03-17 00:46:33.201946 | orchestrator | + local attempt_num=1 2026-03-17 00:46:33.202355 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-17 00:46:33.227466 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-17 00:46:33.227556 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-17 00:46:33.227569 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-17 00:46:33.396959 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-17 00:46:33.536063 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-17 00:46:33.704559 | orchestrator | ARA in osism-ansible already disabled. 2026-03-17 00:46:33.868050 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-17 00:46:33.868180 | orchestrator | + osism apply gather-facts 2026-03-17 00:46:45.229502 | orchestrator | 2026-03-17 00:46:45 | INFO  | Prepare task for execution of gather-facts. 2026-03-17 00:46:45.306602 | orchestrator | 2026-03-17 00:46:45 | INFO  | Task 42f4543d-95b0-4562-803b-e32e689e89de (gather-facts) was prepared for execution. 2026-03-17 00:46:45.306695 | orchestrator | 2026-03-17 00:46:45 | INFO  | It takes a moment until task 42f4543d-95b0-4562-803b-e32e689e89de (gather-facts) has been started and output is visible here. 
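The `+`/`++` trace above shows a `wait_for_container_healthy` helper polling each container's Docker health status until it flips from `unhealthy`/`starting` to `healthy`. The following is a minimal reconstruction of that pattern inferred from the trace alone (the real helper lives in the testbed configuration scripts and may differ): poll `docker inspect -f '{{.State.Health.Status}}'`, sleep 5 seconds between attempts, and give up after `max_attempts` polls.

```shell
#!/bin/sh
# Sketch of the health-wait loop seen in the trace above. Assumption: the
# real script is equivalent up to naming; this version is reconstructed
# from the logged variable names and commands only.
wait_for_container_healthy() {
    max_attempts="$1"     # e.g. 60, as in the trace
    name="$2"             # container name, e.g. ceph-ansible
    attempt_num=1
    # Loop until the container's health status reads "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        # Mirrors the trace's post-increment check: fail once the
        # attempt counter reaches max_attempts, otherwise back off 5s.
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the log, `ceph-ansible` cycles through `unhealthy` and `starting` for roughly a minute after the `manager.service` restart before reporting `healthy`, while `kolla-ansible` and `osism-ansible` pass on the first poll.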
2026-03-17 00:46:55.349068 | orchestrator | 2026-03-17 00:46:55.349170 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-17 00:46:55.349186 | orchestrator | 2026-03-17 00:46:55.349199 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-17 00:46:55.349210 | orchestrator | Tuesday 17 March 2026 00:46:48 +0000 (0:00:00.290) 0:00:00.290 ********* 2026-03-17 00:46:55.349222 | orchestrator | ok: [testbed-manager] 2026-03-17 00:46:55.349234 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:46:55.349245 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:46:55.349255 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:46:55.349266 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:46:55.349314 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:46:55.349328 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:46:55.349339 | orchestrator | 2026-03-17 00:46:55.349349 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-17 00:46:55.349360 | orchestrator | 2026-03-17 00:46:55.349371 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-17 00:46:55.349382 | orchestrator | Tuesday 17 March 2026 00:46:54 +0000 (0:00:05.809) 0:00:06.100 ********* 2026-03-17 00:46:55.349393 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:46:55.349405 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:46:55.349416 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:46:55.349427 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:46:55.349437 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:46:55.349478 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:46:55.349490 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:46:55.349501 | orchestrator | 2026-03-17 00:46:55.349511 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-17 00:46:55.349522 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:46:55.349535 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:46:55.349545 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:46:55.349556 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:46:55.349567 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:46:55.349577 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:46:55.349588 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:46:55.349599 | orchestrator | 2026-03-17 00:46:55.349610 | orchestrator | 2026-03-17 00:46:55.349623 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:46:55.349636 | orchestrator | Tuesday 17 March 2026 00:46:55 +0000 (0:00:00.664) 0:00:06.765 ********* 2026-03-17 00:46:55.349648 | orchestrator | =============================================================================== 2026-03-17 00:46:55.349678 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.81s 2026-03-17 00:46:55.349691 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.66s 2026-03-17 00:46:55.540202 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-17 00:46:55.559086 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-17 
00:46:55.581485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-17 00:46:55.596927 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-17 00:46:55.610397 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-17 00:46:55.621487 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-17 00:46:55.632791 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-17 00:46:55.642912 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-17 00:46:55.663061 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-17 00:46:55.675887 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-17 00:46:55.693001 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-17 00:46:55.708059 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-17 00:46:55.727024 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-17 00:46:55.747784 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-17 00:46:55.766088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-17 00:46:55.781660 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-17 00:46:55.793719 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-17 00:46:55.804687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-17 00:46:55.815534 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-17 00:46:55.826343 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-17 00:46:55.836997 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-17 00:46:55.847823 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-17 00:46:55.858330 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-17 00:46:55.870304 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-17 00:46:55.989046 | orchestrator | ok: Runtime: 0:23:59.282939 2026-03-17 00:46:56.100415 | 2026-03-17 00:46:56.100580 | TASK [Deploy services] 2026-03-17 00:46:56.633792 | orchestrator | skipping: Conditional result was False 2026-03-17 00:46:56.651350 | 2026-03-17 00:46:56.651559 | TASK [Deploy in a nutshell] 2026-03-17 00:46:57.374688 | orchestrator | + set -e 2026-03-17 00:46:57.374828 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-17 00:46:57.374842 | orchestrator | ++ export INTERACTIVE=false 2026-03-17 00:46:57.374853 | orchestrator | ++ INTERACTIVE=false 2026-03-17 00:46:57.374861 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-17 00:46:57.374868 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-17 00:46:57.374876 | 
orchestrator | + source /opt/manager-vars.sh 2026-03-17 00:46:57.374923 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-17 00:46:57.374940 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-17 00:46:57.374957 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-17 00:46:57.374967 | orchestrator | ++ CEPH_VERSION=reef 2026-03-17 00:46:57.374973 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-17 00:46:57.374983 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-17 00:46:57.374989 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-17 00:46:57.375000 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-17 00:46:57.375006 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-03-17 00:46:57.375015 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-03-17 00:46:57.375021 | orchestrator | ++ export ARA=false 2026-03-17 00:46:57.375027 | orchestrator | ++ ARA=false 2026-03-17 00:46:57.375032 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-17 00:46:57.375039 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-17 00:46:57.375044 | orchestrator | ++ export TEMPEST=true 2026-03-17 00:46:57.375050 | orchestrator | ++ TEMPEST=true 2026-03-17 00:46:57.375055 | orchestrator | ++ export IS_ZUUL=true 2026-03-17 00:46:57.375061 | orchestrator | ++ IS_ZUUL=true 2026-03-17 00:46:57.375066 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2026-03-17 00:46:57.375072 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2026-03-17 00:46:57.375078 | orchestrator | ++ export EXTERNAL_API=false 2026-03-17 00:46:57.375083 | orchestrator | ++ EXTERNAL_API=false 2026-03-17 00:46:57.375089 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-17 00:46:57.375094 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-17 00:46:57.375100 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-17 00:46:57.375108 | orchestrator | 2026-03-17 00:46:57.375115 | orchestrator | # PULL IMAGES 2026-03-17 00:46:57.375124 | orchestrator | 2026-03-17 00:46:57.375133 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-17 00:46:57.375142 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-17 00:46:57.375157 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-17 00:46:57.375166 | orchestrator | + echo 2026-03-17 00:46:57.375175 | orchestrator | + echo '# PULL IMAGES' 2026-03-17 00:46:57.375185 | orchestrator | + echo 2026-03-17 00:46:57.376399 | orchestrator | ++ semver latest 7.0.0 2026-03-17 00:46:57.433201 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-17 00:46:57.433372 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-17 00:46:57.433406 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-17 00:46:58.634593 | orchestrator | 2026-03-17 00:46:58 | INFO  | Trying to run play pull-images in environment custom 2026-03-17 00:47:08.673303 | orchestrator | 2026-03-17 00:47:08 | INFO  | Prepare task for execution of pull-images. 2026-03-17 00:47:08.763593 | orchestrator | 2026-03-17 00:47:08 | INFO  | Task c13faef0-3bc3-4da7-8b37-cf51309cfafb (pull-images) was prepared for execution. 2026-03-17 00:47:08.763716 | orchestrator | 2026-03-17 00:47:08 | INFO  | Task c13faef0-3bc3-4da7-8b37-cf51309cfafb is running in background. No more output. Check ARA for logs. 2026-03-17 00:47:10.290905 | orchestrator | 2026-03-17 00:47:10 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-17 00:47:20.375203 | orchestrator | 2026-03-17 00:47:20 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-17 00:47:20.520439 | orchestrator | 2026-03-17 00:47:20 | INFO  | Task 96fed92e-8beb-4a6f-897c-5690307a1798 (wipe-partitions) was prepared for execution. 2026-03-17 00:47:20.520549 | orchestrator | 2026-03-17 00:47:20 | INFO  | It takes a moment until task 96fed92e-8beb-4a6f-897c-5690307a1798 (wipe-partitions) has been started and output is visible here. 
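The "Deploy in a nutshell" trace above gates the `osism apply --no-wait -r 2 -e custom pull-images` call on a semver comparison (`semver latest 7.0.0` returns `-1`), then falls back to a literal match on the tag `latest`. A minimal sketch of that gate, assuming `semver_cmp` and `should_pull` as hypothetical stand-ins for the `semver` helper sourced from `include.sh`:

```shell
#!/bin/sh
# Sketch of the version gate seen in the trace: images are pulled when the
# manager version compares >= 7.0.0, OR when it is the literal tag "latest"
# (mirroring the `[[ latest == \l\a\t\e\s\t ]]` fallback in the log).
# semver_cmp is a hypothetical stand-in for the real `semver` helper.
semver_cmp() {
  # Print -1, 0, or 1 comparing two dotted versions via sort -V.
  if [ "$1" = "$2" ]; then echo 0; return; fi
  lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$lower" = "$1" ]; then echo -1; else echo 1; fi
}

should_pull() {
  v="$1"
  if [ "$v" = "latest" ] || [ "$(semver_cmp "$v" 7.0.0)" -ge 0 ]; then
    echo yes   # would run: osism apply --no-wait -r 2 -e custom pull-images
  else
    echo no
  fi
}
```

With `MANAGER_VERSION=latest`, as in this job, the numeric comparison fails but the literal-tag branch still triggers the pull, which matches the `osism apply` call that follows in the log.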
2026-03-17 00:47:32.184992 | orchestrator | 2026-03-17 00:47:32.185093 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-17 00:47:32.185106 | orchestrator | 2026-03-17 00:47:32.185114 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-17 00:47:32.185129 | orchestrator | Tuesday 17 March 2026 00:47:23 +0000 (0:00:00.145) 0:00:00.145 ********* 2026-03-17 00:47:32.185160 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:47:32.185170 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:47:32.185177 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:47:32.185185 | orchestrator | 2026-03-17 00:47:32.185192 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-17 00:47:32.185199 | orchestrator | Tuesday 17 March 2026 00:47:24 +0000 (0:00:01.005) 0:00:01.151 ********* 2026-03-17 00:47:32.185210 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:47:32.185218 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:47:32.185226 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:32.185233 | orchestrator | 2026-03-17 00:47:32.185240 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-17 00:47:32.185248 | orchestrator | Tuesday 17 March 2026 00:47:24 +0000 (0:00:00.259) 0:00:01.410 ********* 2026-03-17 00:47:32.185255 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:47:32.185263 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:47:32.185270 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:32.185277 | orchestrator | 2026-03-17 00:47:32.185285 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-17 00:47:32.185292 | orchestrator | Tuesday 17 March 2026 00:47:25 +0000 (0:00:00.605) 0:00:02.015 ********* 2026-03-17 00:47:32.185299 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 00:47:32.185307 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:47:32.185314 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:32.185321 | orchestrator | 2026-03-17 00:47:32.185329 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-17 00:47:32.185336 | orchestrator | Tuesday 17 March 2026 00:47:25 +0000 (0:00:00.242) 0:00:02.257 ********* 2026-03-17 00:47:32.185344 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-17 00:47:32.185354 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-17 00:47:32.185362 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-17 00:47:32.185369 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-17 00:47:32.185376 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-17 00:47:32.185383 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-17 00:47:32.185391 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-17 00:47:32.185425 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-17 00:47:32.185433 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-17 00:47:32.185441 | orchestrator | 2026-03-17 00:47:32.185448 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-17 00:47:32.185456 | orchestrator | Tuesday 17 March 2026 00:47:27 +0000 (0:00:01.468) 0:00:03.726 ********* 2026-03-17 00:47:32.185463 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-17 00:47:32.185471 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-17 00:47:32.185478 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-17 00:47:32.185485 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-17 00:47:32.185492 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-17 00:47:32.185499 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-17 00:47:32.185506 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-17 00:47:32.185513 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-17 00:47:32.185521 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-17 00:47:32.185528 | orchestrator | 2026-03-17 00:47:32.185541 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-17 00:47:32.185550 | orchestrator | Tuesday 17 March 2026 00:47:28 +0000 (0:00:01.386) 0:00:05.113 ********* 2026-03-17 00:47:32.185559 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-17 00:47:32.185567 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-17 00:47:32.185576 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-17 00:47:32.185584 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-17 00:47:32.185599 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-17 00:47:32.185608 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-17 00:47:32.185616 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-17 00:47:32.185624 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-17 00:47:32.185633 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-17 00:47:32.185641 | orchestrator | 2026-03-17 00:47:32.185650 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-17 00:47:32.185658 | orchestrator | Tuesday 17 March 2026 00:47:30 +0000 (0:00:02.126) 0:00:07.239 ********* 2026-03-17 00:47:32.185667 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:47:32.185675 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:47:32.185683 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:47:32.185691 | orchestrator | 2026-03-17 00:47:32.185700 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-17 00:47:32.185708 | orchestrator | Tuesday 17 March 2026 00:47:31 +0000 (0:00:00.588) 0:00:07.828 ********* 2026-03-17 00:47:32.185716 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:47:32.185725 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:47:32.185733 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:47:32.185742 | orchestrator | 2026-03-17 00:47:32.185749 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:47:32.185758 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:32.185767 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:32.185788 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:32.185796 | orchestrator | 2026-03-17 00:47:32.185804 | orchestrator | 2026-03-17 00:47:32.185811 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:47:32.185819 | orchestrator | Tuesday 17 March 2026 00:47:31 +0000 (0:00:00.820) 0:00:08.648 ********* 2026-03-17 00:47:32.185826 | orchestrator | =============================================================================== 2026-03-17 00:47:32.185833 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s 2026-03-17 00:47:32.185841 | orchestrator | Check device availability ----------------------------------------------- 1.47s 2026-03-17 00:47:32.185848 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.39s 2026-03-17 00:47:32.185855 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.01s 2026-03-17 00:47:32.185863 | orchestrator | Request device events from the kernel 
----------------------------------- 0.82s 2026-03-17 00:47:32.185870 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.61s 2026-03-17 00:47:32.185877 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2026-03-17 00:47:32.185885 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s 2026-03-17 00:47:32.185892 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-03-17 00:47:43.624789 | orchestrator | 2026-03-17 00:47:43 | INFO  | Prepare task for execution of facts. 2026-03-17 00:47:43.695860 | orchestrator | 2026-03-17 00:47:43 | INFO  | Task fd1a00af-65ec-46ab-84b1-83da1becea76 (facts) was prepared for execution. 2026-03-17 00:47:43.695947 | orchestrator | 2026-03-17 00:47:43 | INFO  | It takes a moment until task fd1a00af-65ec-46ab-84b1-83da1becea76 (facts) has been started and output is visible here. 2026-03-17 00:47:54.416184 | orchestrator | 2026-03-17 00:47:54.416339 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-17 00:47:54.416351 | orchestrator | 2026-03-17 00:47:54.416377 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-17 00:47:54.416383 | orchestrator | Tuesday 17 March 2026 00:47:46 +0000 (0:00:00.289) 0:00:00.289 ********* 2026-03-17 00:47:54.416388 | orchestrator | ok: [testbed-manager] 2026-03-17 00:47:54.416395 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:47:54.416401 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:47:54.416406 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:47:54.416418 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:47:54.416423 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:47:54.416428 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:54.416433 | orchestrator | 2026-03-17 00:47:54.416439 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-03-17 00:47:54.416444 | orchestrator | Tuesday 17 March 2026 00:47:47 +0000 (0:00:01.214) 0:00:01.504 ********* 2026-03-17 00:47:54.416450 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:47:54.416456 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:47:54.416461 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:47:54.416527 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:47:54.416533 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:47:54.416539 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:47:54.416544 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:54.416549 | orchestrator | 2026-03-17 00:47:54.416589 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-17 00:47:54.416607 | orchestrator | 2026-03-17 00:47:54.416613 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-17 00:47:54.416620 | orchestrator | Tuesday 17 March 2026 00:47:48 +0000 (0:00:01.063) 0:00:02.567 ********* 2026-03-17 00:47:54.416625 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:47:54.416631 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:47:54.416636 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:47:54.416657 | orchestrator | ok: [testbed-manager] 2026-03-17 00:47:54.416663 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:47:54.416669 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:47:54.416674 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:47:54.416680 | orchestrator | 2026-03-17 00:47:54.416686 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-17 00:47:54.416692 | orchestrator | 2026-03-17 00:47:54.416698 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-17 00:47:54.416793 | orchestrator | Tuesday 17 
March 2026 00:47:53 +0000 (0:00:04.646) 0:00:07.213 ********* 2026-03-17 00:47:54.416800 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:47:54.416807 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:47:54.416813 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:47:54.416819 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:47:54.416825 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:47:54.416831 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:47:54.416836 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:47:54.416842 | orchestrator | 2026-03-17 00:47:54.416848 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:47:54.416855 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:54.416866 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:54.416875 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:54.416883 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:54.416891 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:54.416924 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:54.416934 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:47:54.416944 | orchestrator | 2026-03-17 00:47:54.416954 | orchestrator | 2026-03-17 00:47:54.416964 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:47:54.416974 | orchestrator | Tuesday 17 March 2026 00:47:54 +0000 (0:00:00.515) 0:00:07.729 ********* 2026-03-17 
00:47:54.416984 | orchestrator | =============================================================================== 2026-03-17 00:47:54.416994 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.65s 2026-03-17 00:47:54.417004 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.21s 2026-03-17 00:47:54.417014 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s 2026-03-17 00:47:54.417024 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-03-17 00:47:55.862903 | orchestrator | 2026-03-17 00:47:55 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-17 00:47:55.923770 | orchestrator | 2026-03-17 00:47:55 | INFO  | Task dcce1df5-8c20-4fe8-b4d5-8b34ca33880f (ceph-configure-lvm-volumes) was prepared for execution. 2026-03-17 00:47:55.923841 | orchestrator | 2026-03-17 00:47:55 | INFO  | It takes a moment until task dcce1df5-8c20-4fe8-b4d5-8b34ca33880f (ceph-configure-lvm-volumes) has been started and output is visible here. 
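For orientation, the `wipe-partitions` play recapped above (wipefs, zeroing the first 32M, then a udev reload and trigger) boils down to roughly the following shell. This is a sketch under the assumption that each task name maps one-to-one to the obvious command; the 32 MiB figure and the `/dev/sdb`..`/dev/sdd` device list are taken from the task output in the log.

```shell
#!/bin/sh
# Rough shell equivalent of the wipe-partitions play logged above.
# Assumption: task names map directly to these commands.
wipe_device() {
  target="$1"
  # TASK [Wipe partitions with wipefs]: drop filesystem/partition signatures
  wipefs --all "$target" 2>/dev/null || true
  # TASK [Overwrite first 32M with zeros]: clobber any leftover metadata
  dd if=/dev/zero of="$target" bs=1M count=32 conv=notrunc 2>/dev/null
}

# On the testbed nodes this would run against the OSD disks from the log,
# then refresh udev so the kernel re-reads the now-blank devices, e.g.:
#   for dev in /dev/sdb /dev/sdc /dev/sdd; do wipe_device "$dev"; done
#   udevadm control --reload-rules && udevadm trigger
```

Zeroing only the leading 32 MiB is enough to destroy partition tables, LVM labels, and Ceph OSD headers without the cost of wiping whole disks, which is why the play completes in about two seconds per task.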
2026-03-17 00:48:07.429238 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-17 00:48:07.429315 | orchestrator | 2.16.14 2026-03-17 00:48:07.429323 | orchestrator | 2026-03-17 00:48:07.429328 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-17 00:48:07.429335 | orchestrator | 2026-03-17 00:48:07.429342 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:48:07.429348 | orchestrator | Tuesday 17 March 2026 00:48:00 +0000 (0:00:00.287) 0:00:00.287 ********* 2026-03-17 00:48:07.429355 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-17 00:48:07.429361 | orchestrator | 2026-03-17 00:48:07.429367 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:48:07.429374 | orchestrator | Tuesday 17 March 2026 00:48:00 +0000 (0:00:00.228) 0:00:00.515 ********* 2026-03-17 00:48:07.429380 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:48:07.429387 | orchestrator | 2026-03-17 00:48:07.429393 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429400 | orchestrator | Tuesday 17 March 2026 00:48:00 +0000 (0:00:00.212) 0:00:00.728 ********* 2026-03-17 00:48:07.429415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-17 00:48:07.429422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-17 00:48:07.429426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-17 00:48:07.429430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-17 00:48:07.429434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-17 
00:48:07.429438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-17 00:48:07.429442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-17 00:48:07.429446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-17 00:48:07.429450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-17 00:48:07.429454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-17 00:48:07.429469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-17 00:48:07.429473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-17 00:48:07.429477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-17 00:48:07.429481 | orchestrator | 2026-03-17 00:48:07.429485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429489 | orchestrator | Tuesday 17 March 2026 00:48:01 +0000 (0:00:00.341) 0:00:01.070 ********* 2026-03-17 00:48:07.429492 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:48:07.429496 | orchestrator | 2026-03-17 00:48:07.429537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429542 | orchestrator | Tuesday 17 March 2026 00:48:01 +0000 (0:00:00.439) 0:00:01.509 ********* 2026-03-17 00:48:07.429545 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:48:07.429549 | orchestrator | 2026-03-17 00:48:07.429554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429564 | orchestrator | Tuesday 17 March 2026 00:48:01 +0000 (0:00:00.191) 0:00:01.701 ********* 2026-03-17 
00:48:07.429570 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:48:07.429576 | orchestrator | 2026-03-17 00:48:07.429582 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429588 | orchestrator | Tuesday 17 March 2026 00:48:01 +0000 (0:00:00.190) 0:00:01.891 ********* 2026-03-17 00:48:07.429595 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:48:07.429602 | orchestrator | 2026-03-17 00:48:07.429608 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429615 | orchestrator | Tuesday 17 March 2026 00:48:02 +0000 (0:00:00.205) 0:00:02.096 ********* 2026-03-17 00:48:07.429621 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:48:07.429628 | orchestrator | 2026-03-17 00:48:07.429632 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429636 | orchestrator | Tuesday 17 March 2026 00:48:02 +0000 (0:00:00.175) 0:00:02.272 ********* 2026-03-17 00:48:07.429640 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:48:07.429644 | orchestrator | 2026-03-17 00:48:07.429648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429651 | orchestrator | Tuesday 17 March 2026 00:48:02 +0000 (0:00:00.190) 0:00:02.462 ********* 2026-03-17 00:48:07.429655 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:48:07.429659 | orchestrator | 2026-03-17 00:48:07.429663 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429666 | orchestrator | Tuesday 17 March 2026 00:48:02 +0000 (0:00:00.206) 0:00:02.668 ********* 2026-03-17 00:48:07.429670 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:48:07.429674 | orchestrator | 2026-03-17 00:48:07.429678 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-17 00:48:07.429682 | orchestrator | Tuesday 17 March 2026 00:48:02 +0000 (0:00:00.194) 0:00:02.862 ********* 2026-03-17 00:48:07.429686 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb) 2026-03-17 00:48:07.429693 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb) 2026-03-17 00:48:07.429699 | orchestrator | 2026-03-17 00:48:07.429705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429726 | orchestrator | Tuesday 17 March 2026 00:48:03 +0000 (0:00:00.396) 0:00:03.259 ********* 2026-03-17 00:48:07.429733 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1) 2026-03-17 00:48:07.429738 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1) 2026-03-17 00:48:07.429742 | orchestrator | 2026-03-17 00:48:07.429749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429758 | orchestrator | Tuesday 17 March 2026 00:48:03 +0000 (0:00:00.399) 0:00:03.658 ********* 2026-03-17 00:48:07.429762 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184) 2026-03-17 00:48:07.429766 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184) 2026-03-17 00:48:07.429769 | orchestrator | 2026-03-17 00:48:07.429773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429778 | orchestrator | Tuesday 17 March 2026 00:48:04 +0000 (0:00:00.594) 0:00:04.253 ********* 2026-03-17 00:48:07.429782 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d) 2026-03-17 00:48:07.429787 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d) 2026-03-17 00:48:07.429791 | orchestrator | 2026-03-17 00:48:07.429796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:07.429800 | orchestrator | Tuesday 17 March 2026 00:48:04 +0000 (0:00:00.608) 0:00:04.861 ********* 2026-03-17 00:48:07.429805 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:48:07.429810 | orchestrator | 2026-03-17 00:48:07.429814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:07.429818 | orchestrator | Tuesday 17 March 2026 00:48:05 +0000 (0:00:00.752) 0:00:05.613 ********* 2026-03-17 00:48:07.429823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-17 00:48:07.429827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-17 00:48:07.429831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-17 00:48:07.429836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-17 00:48:07.429840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-17 00:48:07.429844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-17 00:48:07.429849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-17 00:48:07.429853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-17 00:48:07.429858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-17 00:48:07.429862 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-17 00:48:07.429866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-17 00:48:07.429871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-17 00:48:07.429875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-17 00:48:07.429880 | orchestrator |
2026-03-17 00:48:07.429884 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:07.429889 | orchestrator | Tuesday 17 March 2026 00:48:06 +0000 (0:00:00.383) 0:00:05.997 *********
2026-03-17 00:48:07.429893 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:07.429897 | orchestrator |
2026-03-17 00:48:07.429902 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:07.429906 | orchestrator | Tuesday 17 March 2026 00:48:06 +0000 (0:00:00.195) 0:00:06.192 *********
2026-03-17 00:48:07.429910 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:07.429915 | orchestrator |
2026-03-17 00:48:07.429919 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:07.429924 | orchestrator | Tuesday 17 March 2026 00:48:06 +0000 (0:00:00.197) 0:00:06.390 *********
2026-03-17 00:48:07.429928 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:07.429936 | orchestrator |
2026-03-17 00:48:07.429940 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:07.429945 | orchestrator | Tuesday 17 March 2026 00:48:06 +0000 (0:00:00.193) 0:00:06.584 *********
2026-03-17 00:48:07.429949 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:07.429954 | orchestrator |
2026-03-17 00:48:07.429958 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:07.429962 | orchestrator | Tuesday 17 March 2026 00:48:06 +0000 (0:00:00.181) 0:00:06.765 *********
2026-03-17 00:48:07.429967 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:07.429971 | orchestrator |
2026-03-17 00:48:07.429976 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:07.429980 | orchestrator | Tuesday 17 March 2026 00:48:07 +0000 (0:00:00.193) 0:00:06.958 *********
2026-03-17 00:48:07.429985 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:07.429989 | orchestrator |
2026-03-17 00:48:07.429994 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:07.429998 | orchestrator | Tuesday 17 March 2026 00:48:07 +0000 (0:00:00.185) 0:00:07.144 *********
2026-03-17 00:48:07.430002 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:07.430007 | orchestrator |
2026-03-17 00:48:07.430058 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:14.828364 | orchestrator | Tuesday 17 March 2026 00:48:07 +0000 (0:00:00.190) 0:00:07.335 *********
2026-03-17 00:48:14.828462 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.828475 | orchestrator |
2026-03-17 00:48:14.828486 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:14.828495 | orchestrator | Tuesday 17 March 2026 00:48:07 +0000 (0:00:00.193) 0:00:07.529 *********
2026-03-17 00:48:14.828505 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-17 00:48:14.828515 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-17 00:48:14.828546 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-17 00:48:14.828555 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-17 00:48:14.828564 | orchestrator |
2026-03-17 00:48:14.828574 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:14.828600 | orchestrator | Tuesday 17 March 2026 00:48:08 +0000 (0:00:00.932) 0:00:08.461 *********
2026-03-17 00:48:14.828610 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.828619 | orchestrator |
2026-03-17 00:48:14.828628 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:14.828638 | orchestrator | Tuesday 17 March 2026 00:48:08 +0000 (0:00:00.228) 0:00:08.690 *********
2026-03-17 00:48:14.828647 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.828655 | orchestrator |
2026-03-17 00:48:14.828664 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:14.828674 | orchestrator | Tuesday 17 March 2026 00:48:08 +0000 (0:00:00.194) 0:00:08.884 *********
2026-03-17 00:48:14.828682 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.828691 | orchestrator |
2026-03-17 00:48:14.828700 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:14.828709 | orchestrator | Tuesday 17 March 2026 00:48:09 +0000 (0:00:00.210) 0:00:09.094 *********
2026-03-17 00:48:14.828718 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.828727 | orchestrator |
2026-03-17 00:48:14.828736 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-17 00:48:14.828745 | orchestrator | Tuesday 17 March 2026 00:48:09 +0000 (0:00:00.194) 0:00:09.288 *********
2026-03-17 00:48:14.828754 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-17 00:48:14.828763 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-17 00:48:14.828772 | orchestrator |
2026-03-17 00:48:14.828781 | orchestrator | TASK [Generate WAL VG names]
***************************************************
2026-03-17 00:48:14.828789 | orchestrator | Tuesday 17 March 2026 00:48:09 +0000 (0:00:00.161) 0:00:09.450 *********
2026-03-17 00:48:14.828815 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.828825 | orchestrator |
2026-03-17 00:48:14.828834 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-17 00:48:14.828842 | orchestrator | Tuesday 17 March 2026 00:48:09 +0000 (0:00:00.130) 0:00:09.581 *********
2026-03-17 00:48:14.828851 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.828860 | orchestrator |
2026-03-17 00:48:14.828869 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-17 00:48:14.828877 | orchestrator | Tuesday 17 March 2026 00:48:09 +0000 (0:00:00.126) 0:00:09.708 *********
2026-03-17 00:48:14.828886 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.828905 | orchestrator |
2026-03-17 00:48:14.828915 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-17 00:48:14.828925 | orchestrator | Tuesday 17 March 2026 00:48:09 +0000 (0:00:00.118) 0:00:09.826 *********
2026-03-17 00:48:14.828935 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:48:14.828945 | orchestrator |
2026-03-17 00:48:14.828955 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-17 00:48:14.828965 | orchestrator | Tuesday 17 March 2026 00:48:10 +0000 (0:00:00.122) 0:00:09.948 *********
2026-03-17 00:48:14.828976 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16ca22cf-64f9-579d-994c-d43933026c5f'}})
2026-03-17 00:48:14.828986 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'}})
2026-03-17 00:48:14.828996 | orchestrator |
2026-03-17 00:48:14.829006 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-17 00:48:14.829016 | orchestrator | Tuesday 17 March 2026 00:48:10 +0000 (0:00:00.160) 0:00:10.109 *********
2026-03-17 00:48:14.829027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16ca22cf-64f9-579d-994c-d43933026c5f'}})
2026-03-17 00:48:14.829045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'}})
2026-03-17 00:48:14.829062 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.829073 | orchestrator |
2026-03-17 00:48:14.829083 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-17 00:48:14.829093 | orchestrator | Tuesday 17 March 2026 00:48:10 +0000 (0:00:00.142) 0:00:10.252 *********
2026-03-17 00:48:14.829103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16ca22cf-64f9-579d-994c-d43933026c5f'}})
2026-03-17 00:48:14.829113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'}})
2026-03-17 00:48:14.829123 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.829133 | orchestrator |
2026-03-17 00:48:14.829143 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-17 00:48:14.829153 | orchestrator | Tuesday 17 March 2026 00:48:10 +0000 (0:00:00.140) 0:00:10.392 *********
2026-03-17 00:48:14.829165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16ca22cf-64f9-579d-994c-d43933026c5f'}})
2026-03-17 00:48:14.829189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'}})
2026-03-17 00:48:14.829200 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.829210 | orchestrator |
2026-03-17 00:48:14.829220 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-17 00:48:14.829230 | orchestrator | Tuesday 17 March 2026 00:48:10 +0000 (0:00:00.355) 0:00:10.747 *********
2026-03-17 00:48:14.829240 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:48:14.829250 | orchestrator |
2026-03-17 00:48:14.829261 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-17 00:48:14.829271 | orchestrator | Tuesday 17 March 2026 00:48:10 +0000 (0:00:00.139) 0:00:10.887 *********
2026-03-17 00:48:14.829281 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:48:14.829301 | orchestrator |
2026-03-17 00:48:14.829316 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-17 00:48:14.829331 | orchestrator | Tuesday 17 March 2026 00:48:11 +0000 (0:00:00.133) 0:00:11.021 *********
2026-03-17 00:48:14.829343 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.829356 | orchestrator |
2026-03-17 00:48:14.829382 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-17 00:48:14.829395 | orchestrator | Tuesday 17 March 2026 00:48:11 +0000 (0:00:00.145) 0:00:11.166 *********
2026-03-17 00:48:14.829409 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.829423 | orchestrator |
2026-03-17 00:48:14.829437 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-17 00:48:14.829450 | orchestrator | Tuesday 17 March 2026 00:48:11 +0000 (0:00:00.120) 0:00:11.287 *********
2026-03-17 00:48:14.829464 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.829479 | orchestrator |
2026-03-17 00:48:14.829494 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-17 00:48:14.829508 | orchestrator | Tuesday 17 March 2026 00:48:11 +0000
(0:00:00.131) 0:00:11.418 *********
2026-03-17 00:48:14.829607 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 00:48:14.829624 | orchestrator |  "ceph_osd_devices": {
2026-03-17 00:48:14.829639 | orchestrator |  "sdb": {
2026-03-17 00:48:14.829655 | orchestrator |  "osd_lvm_uuid": "16ca22cf-64f9-579d-994c-d43933026c5f"
2026-03-17 00:48:14.829671 | orchestrator |  },
2026-03-17 00:48:14.829686 | orchestrator |  "sdc": {
2026-03-17 00:48:14.829702 | orchestrator |  "osd_lvm_uuid": "b13aeae0-05c6-5bfd-ada4-b68b1762c1d5"
2026-03-17 00:48:14.829716 | orchestrator |  }
2026-03-17 00:48:14.829732 | orchestrator |  }
2026-03-17 00:48:14.829748 | orchestrator | }
2026-03-17 00:48:14.829763 | orchestrator |
2026-03-17 00:48:14.829778 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-17 00:48:14.829793 | orchestrator | Tuesday 17 March 2026 00:48:11 +0000 (0:00:00.148) 0:00:11.566 *********
2026-03-17 00:48:14.829808 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.829822 | orchestrator |
2026-03-17 00:48:14.829837 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-17 00:48:14.829852 | orchestrator | Tuesday 17 March 2026 00:48:11 +0000 (0:00:00.121) 0:00:11.688 *********
2026-03-17 00:48:14.829866 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.829881 | orchestrator |
2026-03-17 00:48:14.829897 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-17 00:48:14.829912 | orchestrator | Tuesday 17 March 2026 00:48:11 +0000 (0:00:00.127) 0:00:11.815 *********
2026-03-17 00:48:14.829926 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:48:14.829941 | orchestrator |
2026-03-17 00:48:14.829956 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-17 00:48:14.829969 | orchestrator | Tuesday 17 March 2026 00:48:12 +0000 (0:00:00.123) 0:00:11.938 *********
2026-03-17 00:48:14.829983 | orchestrator | changed: [testbed-node-3] => {
2026-03-17 00:48:14.829997 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-17 00:48:14.830011 | orchestrator |  "ceph_osd_devices": {
2026-03-17 00:48:14.830150 | orchestrator |  "sdb": {
2026-03-17 00:48:14.830166 | orchestrator |  "osd_lvm_uuid": "16ca22cf-64f9-579d-994c-d43933026c5f"
2026-03-17 00:48:14.830181 | orchestrator |  },
2026-03-17 00:48:14.830196 | orchestrator |  "sdc": {
2026-03-17 00:48:14.830210 | orchestrator |  "osd_lvm_uuid": "b13aeae0-05c6-5bfd-ada4-b68b1762c1d5"
2026-03-17 00:48:14.830225 | orchestrator |  }
2026-03-17 00:48:14.830240 | orchestrator |  },
2026-03-17 00:48:14.830255 | orchestrator |  "lvm_volumes": [
2026-03-17 00:48:14.830269 | orchestrator |  {
2026-03-17 00:48:14.830284 | orchestrator |  "data": "osd-block-16ca22cf-64f9-579d-994c-d43933026c5f",
2026-03-17 00:48:14.830299 | orchestrator |  "data_vg": "ceph-16ca22cf-64f9-579d-994c-d43933026c5f"
2026-03-17 00:48:14.830328 | orchestrator |  },
2026-03-17 00:48:14.830345 | orchestrator |  {
2026-03-17 00:48:14.830359 | orchestrator |  "data": "osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5",
2026-03-17 00:48:14.830374 | orchestrator |  "data_vg": "ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5"
2026-03-17 00:48:14.830389 | orchestrator |  }
2026-03-17 00:48:14.830403 | orchestrator |  ]
2026-03-17 00:48:14.830418 | orchestrator |  }
2026-03-17 00:48:14.830432 | orchestrator | }
2026-03-17 00:48:14.830447 | orchestrator |
2026-03-17 00:48:14.830462 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-17 00:48:14.830471 | orchestrator | Tuesday 17 March 2026 00:48:12 +0000 (0:00:00.185) 0:00:12.124 *********
2026-03-17 00:48:14.830480 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-17 00:48:14.830488 | orchestrator |
2026-03-17 00:48:14.830497 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-17 00:48:14.830505 | orchestrator |
2026-03-17 00:48:14.830514 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-17 00:48:14.830576 | orchestrator | Tuesday 17 March 2026 00:48:14 +0000 (0:00:02.136) 0:00:14.261 *********
2026-03-17 00:48:14.830585 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-17 00:48:14.830594 | orchestrator |
2026-03-17 00:48:14.830603 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-17 00:48:14.830612 | orchestrator | Tuesday 17 March 2026 00:48:14 +0000 (0:00:00.230) 0:00:14.502 *********
2026-03-17 00:48:14.830621 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:48:14.830629 | orchestrator |
2026-03-17 00:48:14.830649 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.376087 | orchestrator | Tuesday 17 March 2026 00:48:14 +0000 (0:00:00.230) 0:00:14.732 *********
2026-03-17 00:48:21.376253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-17 00:48:21.376268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-17 00:48:21.376278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-17 00:48:21.376287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-17 00:48:21.376295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-17 00:48:21.376304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-17 00:48:21.376313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-17 00:48:21.376325
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-17 00:48:21.376334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-17 00:48:21.376344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-17 00:48:21.376354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-17 00:48:21.376369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-17 00:48:21.376408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-17 00:48:21.376430 | orchestrator |
2026-03-17 00:48:21.376446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.376625 | orchestrator | Tuesday 17 March 2026 00:48:15 +0000 (0:00:00.370) 0:00:15.103 *********
2026-03-17 00:48:21.376639 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.376650 | orchestrator |
2026-03-17 00:48:21.376661 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.376671 | orchestrator | Tuesday 17 March 2026 00:48:15 +0000 (0:00:00.217) 0:00:15.321 *********
2026-03-17 00:48:21.376700 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.376710 | orchestrator |
2026-03-17 00:48:21.376720 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.376730 | orchestrator | Tuesday 17 March 2026 00:48:15 +0000 (0:00:00.192) 0:00:15.513 *********
2026-03-17 00:48:21.376740 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.376750 | orchestrator |
2026-03-17 00:48:21.376760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.376770 | orchestrator | Tuesday 17 March 2026 00:48:15 +0000 (0:00:00.193) 0:00:15.707 *********
2026-03-17 00:48:21.376780 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.376790 | orchestrator |
2026-03-17 00:48:21.376800 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.376810 | orchestrator | Tuesday 17 March 2026 00:48:15 +0000 (0:00:00.187) 0:00:15.895 *********
2026-03-17 00:48:21.376820 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.376830 | orchestrator |
2026-03-17 00:48:21.376839 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.376849 | orchestrator | Tuesday 17 March 2026 00:48:16 +0000 (0:00:00.193) 0:00:16.088 *********
2026-03-17 00:48:21.376859 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.376869 | orchestrator |
2026-03-17 00:48:21.376879 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.376889 | orchestrator | Tuesday 17 March 2026 00:48:16 +0000 (0:00:00.551) 0:00:16.640 *********
2026-03-17 00:48:21.376899 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.376909 | orchestrator |
2026-03-17 00:48:21.376954 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.376964 | orchestrator | Tuesday 17 March 2026 00:48:16 +0000 (0:00:00.213) 0:00:16.853 *********
2026-03-17 00:48:21.376974 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.376984 | orchestrator |
2026-03-17 00:48:21.376994 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.377003 | orchestrator | Tuesday 17 March 2026 00:48:17 +0000 (0:00:00.200) 0:00:17.054 *********
2026-03-17 00:48:21.377047 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88)
2026-03-17 00:48:21.377057 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88)
2026-03-17 00:48:21.377066 | orchestrator |
2026-03-17 00:48:21.377075 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.377084 | orchestrator | Tuesday 17 March 2026 00:48:17 +0000 (0:00:00.361) 0:00:17.416 *********
2026-03-17 00:48:21.377092 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235)
2026-03-17 00:48:21.377101 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235)
2026-03-17 00:48:21.377133 | orchestrator |
2026-03-17 00:48:21.377144 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.377153 | orchestrator | Tuesday 17 March 2026 00:48:17 +0000 (0:00:00.359) 0:00:17.776 *********
2026-03-17 00:48:21.377162 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32)
2026-03-17 00:48:21.377171 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32)
2026-03-17 00:48:21.377179 | orchestrator |
2026-03-17 00:48:21.377188 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:48:21.377246 | orchestrator | Tuesday 17 March 2026 00:48:18 +0000 (0:00:00.402) 0:00:18.179 *********
2026-03-17 00:48:21.377288 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b)
2026-03-17 00:48:21.377298 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b)
2026-03-17 00:48:21.377307 | orchestrator |
2026-03-17 00:48:21.377356 | orchestrator | TASK [Add known links to
the list of available block devices] ******************
2026-03-17 00:48:21.377365 | orchestrator | Tuesday 17 March 2026 00:48:18 +0000 (0:00:00.389) 0:00:18.568 *********
2026-03-17 00:48:21.377374 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-17 00:48:21.377383 | orchestrator |
2026-03-17 00:48:21.377391 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.377400 | orchestrator | Tuesday 17 March 2026 00:48:18 +0000 (0:00:00.249) 0:00:18.818 *********
2026-03-17 00:48:21.377409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-17 00:48:21.377418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-17 00:48:21.377434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-17 00:48:21.377443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-17 00:48:21.377452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-17 00:48:21.377460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-17 00:48:21.377469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-17 00:48:21.377478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-17 00:48:21.377486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-17 00:48:21.377495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-17 00:48:21.377608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-17 00:48:21.377619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-17 00:48:21.377627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-17 00:48:21.377636 | orchestrator |
2026-03-17 00:48:21.377644 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.377653 | orchestrator | Tuesday 17 March 2026 00:48:19 +0000 (0:00:00.279) 0:00:19.098 *********
2026-03-17 00:48:21.377662 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.377671 | orchestrator |
2026-03-17 00:48:21.377679 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.377688 | orchestrator | Tuesday 17 March 2026 00:48:19 +0000 (0:00:00.163) 0:00:19.261 *********
2026-03-17 00:48:21.377697 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.377706 | orchestrator |
2026-03-17 00:48:21.377714 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.377723 | orchestrator | Tuesday 17 March 2026 00:48:19 +0000 (0:00:00.413) 0:00:19.675 *********
2026-03-17 00:48:21.377732 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.377741 | orchestrator |
2026-03-17 00:48:21.377749 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.377758 | orchestrator | Tuesday 17 March 2026 00:48:19 +0000 (0:00:00.177) 0:00:19.852 *********
2026-03-17 00:48:21.377766 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.377881 | orchestrator |
2026-03-17 00:48:21.377900 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.377913 | orchestrator | Tuesday 17 March 2026 00:48:20 +0000 (0:00:00.216) 0:00:20.069 *********
2026-03-17 00:48:21.378144 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.378162 | orchestrator |
2026-03-17 00:48:21.378171 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.378179 | orchestrator | Tuesday 17 March 2026 00:48:20 +0000 (0:00:00.139) 0:00:20.209 *********
2026-03-17 00:48:21.378188 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.378254 | orchestrator |
2026-03-17 00:48:21.378265 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.378274 | orchestrator | Tuesday 17 March 2026 00:48:20 +0000 (0:00:00.137) 0:00:20.346 *********
2026-03-17 00:48:21.378283 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.378292 | orchestrator |
2026-03-17 00:48:21.378301 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.378310 | orchestrator | Tuesday 17 March 2026 00:48:20 +0000 (0:00:00.150) 0:00:20.496 *********
2026-03-17 00:48:21.378318 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:21.378327 | orchestrator |
2026-03-17 00:48:21.378336 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.378345 | orchestrator | Tuesday 17 March 2026 00:48:20 +0000 (0:00:00.177) 0:00:20.674 *********
2026-03-17 00:48:21.378354 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-17 00:48:21.378363 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-17 00:48:21.378372 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-17 00:48:21.378458 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-17 00:48:21.378469 | orchestrator |
2026-03-17 00:48:21.378478 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:21.378487 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:00.518) 0:00:21.193
*********
2026-03-17 00:48:21.378519 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.868474 | orchestrator |
2026-03-17 00:48:26.868630 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:26.868648 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:00.153) 0:00:21.347 *********
2026-03-17 00:48:26.868660 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.868672 | orchestrator |
2026-03-17 00:48:26.868683 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:26.868694 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:00.205) 0:00:21.552 *********
2026-03-17 00:48:26.868705 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.868716 | orchestrator |
2026-03-17 00:48:26.868728 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:48:26.868738 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:00.179) 0:00:21.732 *********
2026-03-17 00:48:26.868749 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.868759 | orchestrator |
2026-03-17 00:48:26.868769 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-17 00:48:26.868780 | orchestrator | Tuesday 17 March 2026 00:48:21 +0000 (0:00:00.157) 0:00:21.889 *********
2026-03-17 00:48:26.868791 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-17 00:48:26.868802 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-17 00:48:26.868813 | orchestrator |
2026-03-17 00:48:26.868824 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-17 00:48:26.868863 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:00.251) 0:00:22.140 *********
2026-03-17 00:48:26.868875 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.868886 | orchestrator |
2026-03-17 00:48:26.868897 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-17 00:48:26.868908 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:00.117) 0:00:22.258 *********
2026-03-17 00:48:26.868918 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.868929 | orchestrator |
2026-03-17 00:48:26.868939 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-17 00:48:26.868954 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:00.124) 0:00:22.383 *********
2026-03-17 00:48:26.868965 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.868976 | orchestrator |
2026-03-17 00:48:26.868986 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-17 00:48:26.868997 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:00.117) 0:00:22.500 *********
2026-03-17 00:48:26.869032 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:48:26.869044 | orchestrator |
2026-03-17 00:48:26.869055 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-17 00:48:26.869066 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:00.108) 0:00:22.609 *********
2026-03-17 00:48:26.869078 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd77b95b6-dc37-5eed-9a6e-c7871424e120'}})
2026-03-17 00:48:26.869088 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec88a4df-1f79-596d-b281-118c477c78df'}})
2026-03-17 00:48:26.869098 | orchestrator |
2026-03-17 00:48:26.869109 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-17 00:48:26.869120 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:00.143) 0:00:22.752 *********
2026-03-17 00:48:26.869131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd77b95b6-dc37-5eed-9a6e-c7871424e120'}})
2026-03-17 00:48:26.869144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec88a4df-1f79-596d-b281-118c477c78df'}})
2026-03-17 00:48:26.869155 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.869167 | orchestrator |
2026-03-17 00:48:26.869179 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-17 00:48:26.869190 | orchestrator | Tuesday 17 March 2026 00:48:22 +0000 (0:00:00.121) 0:00:22.874 *********
2026-03-17 00:48:26.869200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd77b95b6-dc37-5eed-9a6e-c7871424e120'}})
2026-03-17 00:48:26.869210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec88a4df-1f79-596d-b281-118c477c78df'}})
2026-03-17 00:48:26.869222 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.869233 | orchestrator |
2026-03-17 00:48:26.869243 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-17 00:48:26.869253 | orchestrator | Tuesday 17 March 2026 00:48:23 +0000 (0:00:00.139) 0:00:23.014 *********
2026-03-17 00:48:26.869264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd77b95b6-dc37-5eed-9a6e-c7871424e120'}})
2026-03-17 00:48:26.869275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec88a4df-1f79-596d-b281-118c477c78df'}})
2026-03-17 00:48:26.869285 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.869295 | orchestrator |
2026-03-17 00:48:26.869306 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-17 00:48:26.869316 | orchestrator | Tuesday 17 March 2026 00:48:23 +0000
(0:00:00.131) 0:00:23.145 *********
2026-03-17 00:48:26.869327 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:48:26.869337 | orchestrator |
2026-03-17 00:48:26.869347 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-17 00:48:26.869358 | orchestrator | Tuesday 17 March 2026 00:48:23 +0000 (0:00:00.124) 0:00:23.270 *********
2026-03-17 00:48:26.869369 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:48:26.869379 | orchestrator |
2026-03-17 00:48:26.869390 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-17 00:48:26.869400 | orchestrator | Tuesday 17 March 2026 00:48:23 +0000 (0:00:00.132) 0:00:23.403 *********
2026-03-17 00:48:26.869428 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.869439 | orchestrator |
2026-03-17 00:48:26.869449 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-17 00:48:26.869460 | orchestrator | Tuesday 17 March 2026 00:48:23 +0000 (0:00:00.119) 0:00:23.523 *********
2026-03-17 00:48:26.869471 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.869481 | orchestrator |
2026-03-17 00:48:26.869492 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-17 00:48:26.869502 | orchestrator | Tuesday 17 March 2026 00:48:23 +0000 (0:00:00.316) 0:00:23.839 *********
2026-03-17 00:48:26.869512 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.869531 | orchestrator |
2026-03-17 00:48:26.869542 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-17 00:48:26.869618 | orchestrator | Tuesday 17 March 2026 00:48:24 +0000 (0:00:00.110) 0:00:23.950 *********
2026-03-17 00:48:26.869631 | orchestrator | ok: [testbed-node-4] => {
2026-03-17 00:48:26.869642 | orchestrator |  "ceph_osd_devices": {
2026-03-17 00:48:26.869653 | orchestrator |  "sdb": {
2026-03-17 00:48:26.869665 | orchestrator |  "osd_lvm_uuid": "d77b95b6-dc37-5eed-9a6e-c7871424e120"
2026-03-17 00:48:26.869676 | orchestrator |  },
2026-03-17 00:48:26.869687 | orchestrator |  "sdc": {
2026-03-17 00:48:26.869698 | orchestrator |  "osd_lvm_uuid": "ec88a4df-1f79-596d-b281-118c477c78df"
2026-03-17 00:48:26.869709 | orchestrator |  }
2026-03-17 00:48:26.869720 | orchestrator |  }
2026-03-17 00:48:26.869732 | orchestrator | }
2026-03-17 00:48:26.869743 | orchestrator |
2026-03-17 00:48:26.869754 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-17 00:48:26.869765 | orchestrator | Tuesday 17 March 2026 00:48:24 +0000 (0:00:00.141) 0:00:24.092 *********
2026-03-17 00:48:26.869776 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.869786 | orchestrator |
2026-03-17 00:48:26.869797 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-17 00:48:26.869808 | orchestrator | Tuesday 17 March 2026 00:48:24 +0000 (0:00:00.115) 0:00:24.207 *********
2026-03-17 00:48:26.869820 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.869831 | orchestrator |
2026-03-17 00:48:26.869842 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-17 00:48:26.869854 | orchestrator | Tuesday 17 March 2026 00:48:24 +0000 (0:00:00.124) 0:00:24.332 *********
2026-03-17 00:48:26.869864 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:48:26.869875 | orchestrator |
2026-03-17 00:48:26.869886 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-17 00:48:26.869904 | orchestrator | Tuesday 17 March 2026 00:48:24 +0000 (0:00:00.103) 0:00:24.435 *********
2026-03-17 00:48:26.869916 | orchestrator | changed: [testbed-node-4] => {
2026-03-17 00:48:26.869928 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-03-17 00:48:26.869939 | orchestrator |
 "ceph_osd_devices": { 2026-03-17 00:48:26.869950 | orchestrator |  "sdb": { 2026-03-17 00:48:26.869961 | orchestrator |  "osd_lvm_uuid": "d77b95b6-dc37-5eed-9a6e-c7871424e120" 2026-03-17 00:48:26.869972 | orchestrator |  }, 2026-03-17 00:48:26.869983 | orchestrator |  "sdc": { 2026-03-17 00:48:26.869994 | orchestrator |  "osd_lvm_uuid": "ec88a4df-1f79-596d-b281-118c477c78df" 2026-03-17 00:48:26.870005 | orchestrator |  } 2026-03-17 00:48:26.870071 | orchestrator |  }, 2026-03-17 00:48:26.870084 | orchestrator |  "lvm_volumes": [ 2026-03-17 00:48:26.870095 | orchestrator |  { 2026-03-17 00:48:26.870106 | orchestrator |  "data": "osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120", 2026-03-17 00:48:26.870117 | orchestrator |  "data_vg": "ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120" 2026-03-17 00:48:26.870128 | orchestrator |  }, 2026-03-17 00:48:26.870139 | orchestrator |  { 2026-03-17 00:48:26.870150 | orchestrator |  "data": "osd-block-ec88a4df-1f79-596d-b281-118c477c78df", 2026-03-17 00:48:26.870161 | orchestrator |  "data_vg": "ceph-ec88a4df-1f79-596d-b281-118c477c78df" 2026-03-17 00:48:26.870171 | orchestrator |  } 2026-03-17 00:48:26.870182 | orchestrator |  ] 2026-03-17 00:48:26.870193 | orchestrator |  } 2026-03-17 00:48:26.870204 | orchestrator | } 2026-03-17 00:48:26.870216 | orchestrator | 2026-03-17 00:48:26.870227 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-17 00:48:26.870238 | orchestrator | Tuesday 17 March 2026 00:48:24 +0000 (0:00:00.183) 0:00:24.619 ********* 2026-03-17 00:48:26.870249 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-17 00:48:26.870259 | orchestrator | 2026-03-17 00:48:26.870278 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-17 00:48:26.870289 | orchestrator | 2026-03-17 00:48:26.870301 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2026-03-17 00:48:26.870312 | orchestrator | Tuesday 17 March 2026 00:48:25 +0000 (0:00:01.055) 0:00:25.674 ********* 2026-03-17 00:48:26.870323 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-17 00:48:26.870334 | orchestrator | 2026-03-17 00:48:26.870345 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:48:26.870356 | orchestrator | Tuesday 17 March 2026 00:48:26 +0000 (0:00:00.370) 0:00:26.044 ********* 2026-03-17 00:48:26.870367 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:48:26.870378 | orchestrator | 2026-03-17 00:48:26.870389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:26.870400 | orchestrator | Tuesday 17 March 2026 00:48:26 +0000 (0:00:00.473) 0:00:26.518 ********* 2026-03-17 00:48:26.870411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-17 00:48:26.870422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-17 00:48:26.870433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-17 00:48:26.870444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-17 00:48:26.870455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-17 00:48:26.870475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-17 00:48:34.159834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-17 00:48:34.159930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-17 00:48:34.159939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-17 
00:48:34.159946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-17 00:48:34.159952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-17 00:48:34.159959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-17 00:48:34.159965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-17 00:48:34.159972 | orchestrator | 2026-03-17 00:48:34.159980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.159989 | orchestrator | Tuesday 17 March 2026 00:48:26 +0000 (0:00:00.331) 0:00:26.849 ********* 2026-03-17 00:48:34.159996 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160003 | orchestrator | 2026-03-17 00:48:34.160009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160016 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.178) 0:00:27.028 ********* 2026-03-17 00:48:34.160021 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160027 | orchestrator | 2026-03-17 00:48:34.160034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160040 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.179) 0:00:27.208 ********* 2026-03-17 00:48:34.160046 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160053 | orchestrator | 2026-03-17 00:48:34.160060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160066 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.178) 0:00:27.386 ********* 2026-03-17 00:48:34.160073 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160078 | orchestrator | 2026-03-17 00:48:34.160085 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160091 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.158) 0:00:27.544 ********* 2026-03-17 00:48:34.160118 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160123 | orchestrator | 2026-03-17 00:48:34.160129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160135 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.185) 0:00:27.730 ********* 2026-03-17 00:48:34.160141 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160148 | orchestrator | 2026-03-17 00:48:34.160154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160161 | orchestrator | Tuesday 17 March 2026 00:48:27 +0000 (0:00:00.168) 0:00:27.899 ********* 2026-03-17 00:48:34.160168 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160174 | orchestrator | 2026-03-17 00:48:34.160181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160188 | orchestrator | Tuesday 17 March 2026 00:48:28 +0000 (0:00:00.215) 0:00:28.115 ********* 2026-03-17 00:48:34.160195 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160201 | orchestrator | 2026-03-17 00:48:34.160206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160212 | orchestrator | Tuesday 17 March 2026 00:48:28 +0000 (0:00:00.183) 0:00:28.298 ********* 2026-03-17 00:48:34.160218 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0) 2026-03-17 00:48:34.160226 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0) 2026-03-17 00:48:34.160232 | orchestrator | 2026-03-17 00:48:34.160238 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160244 | orchestrator | Tuesday 17 March 2026 00:48:28 +0000 (0:00:00.523) 0:00:28.822 ********* 2026-03-17 00:48:34.160264 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7) 2026-03-17 00:48:34.160269 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7) 2026-03-17 00:48:34.160276 | orchestrator | 2026-03-17 00:48:34.160282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160288 | orchestrator | Tuesday 17 March 2026 00:48:29 +0000 (0:00:00.850) 0:00:29.673 ********* 2026-03-17 00:48:34.160294 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865) 2026-03-17 00:48:34.160300 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865) 2026-03-17 00:48:34.160305 | orchestrator | 2026-03-17 00:48:34.160311 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160317 | orchestrator | Tuesday 17 March 2026 00:48:30 +0000 (0:00:00.373) 0:00:30.047 ********* 2026-03-17 00:48:34.160323 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276) 2026-03-17 00:48:34.160328 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276) 2026-03-17 00:48:34.160334 | orchestrator | 2026-03-17 00:48:34.160339 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:48:34.160345 | orchestrator | Tuesday 17 March 2026 00:48:30 +0000 (0:00:00.419) 0:00:30.466 ********* 2026-03-17 00:48:34.160351 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:48:34.160357 | 
orchestrator | 2026-03-17 00:48:34.160364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160385 | orchestrator | Tuesday 17 March 2026 00:48:30 +0000 (0:00:00.288) 0:00:30.755 ********* 2026-03-17 00:48:34.160395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-17 00:48:34.160406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-17 00:48:34.160415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-17 00:48:34.160424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-17 00:48:34.160440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-17 00:48:34.160450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-17 00:48:34.160460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-17 00:48:34.160469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-17 00:48:34.160479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-17 00:48:34.160488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-17 00:48:34.160497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-17 00:48:34.160507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-17 00:48:34.160515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-17 00:48:34.160525 | orchestrator | 
2026-03-17 00:48:34.160536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160545 | orchestrator | Tuesday 17 March 2026 00:48:31 +0000 (0:00:00.332) 0:00:31.088 ********* 2026-03-17 00:48:34.160555 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160562 | orchestrator | 2026-03-17 00:48:34.160587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160594 | orchestrator | Tuesday 17 March 2026 00:48:31 +0000 (0:00:00.172) 0:00:31.260 ********* 2026-03-17 00:48:34.160600 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160606 | orchestrator | 2026-03-17 00:48:34.160611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160617 | orchestrator | Tuesday 17 March 2026 00:48:31 +0000 (0:00:00.197) 0:00:31.457 ********* 2026-03-17 00:48:34.160622 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160628 | orchestrator | 2026-03-17 00:48:34.160634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160640 | orchestrator | Tuesday 17 March 2026 00:48:31 +0000 (0:00:00.178) 0:00:31.636 ********* 2026-03-17 00:48:34.160645 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160651 | orchestrator | 2026-03-17 00:48:34.160657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160663 | orchestrator | Tuesday 17 March 2026 00:48:31 +0000 (0:00:00.221) 0:00:31.857 ********* 2026-03-17 00:48:34.160668 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160674 | orchestrator | 2026-03-17 00:48:34.160680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160686 | orchestrator | Tuesday 17 March 2026 00:48:32 +0000 
(0:00:00.165) 0:00:32.022 ********* 2026-03-17 00:48:34.160692 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160698 | orchestrator | 2026-03-17 00:48:34.160703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160708 | orchestrator | Tuesday 17 March 2026 00:48:32 +0000 (0:00:00.448) 0:00:32.471 ********* 2026-03-17 00:48:34.160714 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160719 | orchestrator | 2026-03-17 00:48:34.160725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160731 | orchestrator | Tuesday 17 March 2026 00:48:32 +0000 (0:00:00.148) 0:00:32.620 ********* 2026-03-17 00:48:34.160737 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160744 | orchestrator | 2026-03-17 00:48:34.160749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160756 | orchestrator | Tuesday 17 March 2026 00:48:32 +0000 (0:00:00.222) 0:00:32.842 ********* 2026-03-17 00:48:34.160762 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-17 00:48:34.160778 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-17 00:48:34.160785 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-17 00:48:34.160791 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-17 00:48:34.160797 | orchestrator | 2026-03-17 00:48:34.160803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160809 | orchestrator | Tuesday 17 March 2026 00:48:33 +0000 (0:00:00.533) 0:00:33.376 ********* 2026-03-17 00:48:34.160815 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160821 | orchestrator | 2026-03-17 00:48:34.160826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160832 | orchestrator | 
Tuesday 17 March 2026 00:48:33 +0000 (0:00:00.152) 0:00:33.529 ********* 2026-03-17 00:48:34.160838 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160844 | orchestrator | 2026-03-17 00:48:34.160850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160855 | orchestrator | Tuesday 17 March 2026 00:48:33 +0000 (0:00:00.176) 0:00:33.705 ********* 2026-03-17 00:48:34.160861 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160866 | orchestrator | 2026-03-17 00:48:34.160872 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:48:34.160879 | orchestrator | Tuesday 17 March 2026 00:48:33 +0000 (0:00:00.163) 0:00:33.869 ********* 2026-03-17 00:48:34.160885 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:34.160890 | orchestrator | 2026-03-17 00:48:34.160902 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-17 00:48:37.926913 | orchestrator | Tuesday 17 March 2026 00:48:34 +0000 (0:00:00.201) 0:00:34.070 ********* 2026-03-17 00:48:37.926989 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-17 00:48:37.926996 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-17 00:48:37.927002 | orchestrator | 2026-03-17 00:48:37.927008 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-17 00:48:37.927013 | orchestrator | Tuesday 17 March 2026 00:48:34 +0000 (0:00:00.153) 0:00:34.224 ********* 2026-03-17 00:48:37.927018 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927023 | orchestrator | 2026-03-17 00:48:37.927028 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-17 00:48:37.927033 | orchestrator | Tuesday 17 March 2026 00:48:34 +0000 (0:00:00.115) 0:00:34.340 ********* 
2026-03-17 00:48:37.927052 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927057 | orchestrator | 2026-03-17 00:48:37.927062 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-17 00:48:37.927067 | orchestrator | Tuesday 17 March 2026 00:48:34 +0000 (0:00:00.124) 0:00:34.464 ********* 2026-03-17 00:48:37.927072 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927077 | orchestrator | 2026-03-17 00:48:37.927083 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-17 00:48:37.927088 | orchestrator | Tuesday 17 March 2026 00:48:34 +0000 (0:00:00.139) 0:00:34.603 ********* 2026-03-17 00:48:37.927093 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:48:37.927099 | orchestrator | 2026-03-17 00:48:37.927104 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-17 00:48:37.927109 | orchestrator | Tuesday 17 March 2026 00:48:34 +0000 (0:00:00.260) 0:00:34.864 ********* 2026-03-17 00:48:37.927114 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50c44467-b3f7-539a-99b7-df2211d1583b'}}) 2026-03-17 00:48:37.927123 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9465b490-647b-5adb-8e2e-a5649c4bc673'}}) 2026-03-17 00:48:37.927128 | orchestrator | 2026-03-17 00:48:37.927133 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-17 00:48:37.927138 | orchestrator | Tuesday 17 March 2026 00:48:35 +0000 (0:00:00.176) 0:00:35.041 ********* 2026-03-17 00:48:37.927143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50c44467-b3f7-539a-99b7-df2211d1583b'}})  2026-03-17 00:48:37.927165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9465b490-647b-5adb-8e2e-a5649c4bc673'}})  
2026-03-17 00:48:37.927171 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927176 | orchestrator | 2026-03-17 00:48:37.927181 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-17 00:48:37.927185 | orchestrator | Tuesday 17 March 2026 00:48:35 +0000 (0:00:00.133) 0:00:35.174 ********* 2026-03-17 00:48:37.927190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50c44467-b3f7-539a-99b7-df2211d1583b'}})  2026-03-17 00:48:37.927195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9465b490-647b-5adb-8e2e-a5649c4bc673'}})  2026-03-17 00:48:37.927200 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927205 | orchestrator | 2026-03-17 00:48:37.927210 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-17 00:48:37.927215 | orchestrator | Tuesday 17 March 2026 00:48:35 +0000 (0:00:00.111) 0:00:35.286 ********* 2026-03-17 00:48:37.927220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50c44467-b3f7-539a-99b7-df2211d1583b'}})  2026-03-17 00:48:37.927225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9465b490-647b-5adb-8e2e-a5649c4bc673'}})  2026-03-17 00:48:37.927230 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927235 | orchestrator | 2026-03-17 00:48:37.927239 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-17 00:48:37.927244 | orchestrator | Tuesday 17 March 2026 00:48:35 +0000 (0:00:00.210) 0:00:35.496 ********* 2026-03-17 00:48:37.927249 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:48:37.927254 | orchestrator | 2026-03-17 00:48:37.927259 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-17 00:48:37.927264 | 
orchestrator | Tuesday 17 March 2026 00:48:35 +0000 (0:00:00.155) 0:00:35.651 ********* 2026-03-17 00:48:37.927269 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:48:37.927273 | orchestrator | 2026-03-17 00:48:37.927278 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-17 00:48:37.927283 | orchestrator | Tuesday 17 March 2026 00:48:35 +0000 (0:00:00.145) 0:00:35.796 ********* 2026-03-17 00:48:37.927288 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927293 | orchestrator | 2026-03-17 00:48:37.927298 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-17 00:48:37.927303 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:00.127) 0:00:35.923 ********* 2026-03-17 00:48:37.927308 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927313 | orchestrator | 2026-03-17 00:48:37.927317 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-17 00:48:37.927322 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:00.119) 0:00:36.043 ********* 2026-03-17 00:48:37.927327 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927332 | orchestrator | 2026-03-17 00:48:37.927337 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-17 00:48:37.927342 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:00.114) 0:00:36.157 ********* 2026-03-17 00:48:37.927347 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:48:37.927351 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:48:37.927357 | orchestrator |  "sdb": { 2026-03-17 00:48:37.927372 | orchestrator |  "osd_lvm_uuid": "50c44467-b3f7-539a-99b7-df2211d1583b" 2026-03-17 00:48:37.927377 | orchestrator |  }, 2026-03-17 00:48:37.927382 | orchestrator |  "sdc": { 2026-03-17 00:48:37.927387 | orchestrator |  "osd_lvm_uuid": 
"9465b490-647b-5adb-8e2e-a5649c4bc673" 2026-03-17 00:48:37.927406 | orchestrator |  } 2026-03-17 00:48:37.927411 | orchestrator |  } 2026-03-17 00:48:37.927416 | orchestrator | } 2026-03-17 00:48:37.927421 | orchestrator | 2026-03-17 00:48:37.927490 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-17 00:48:37.927497 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:00.120) 0:00:36.277 ********* 2026-03-17 00:48:37.927502 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927508 | orchestrator | 2026-03-17 00:48:37.927513 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-17 00:48:37.927519 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:00.108) 0:00:36.386 ********* 2026-03-17 00:48:37.927524 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927529 | orchestrator | 2026-03-17 00:48:37.927535 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-17 00:48:37.927540 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:00.258) 0:00:36.644 ********* 2026-03-17 00:48:37.927546 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:48:37.927551 | orchestrator | 2026-03-17 00:48:37.927557 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-17 00:48:37.927562 | orchestrator | Tuesday 17 March 2026 00:48:36 +0000 (0:00:00.115) 0:00:36.759 ********* 2026-03-17 00:48:37.927567 | orchestrator | changed: [testbed-node-5] => { 2026-03-17 00:48:37.927573 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-17 00:48:37.927619 | orchestrator |  "ceph_osd_devices": { 2026-03-17 00:48:37.927625 | orchestrator |  "sdb": { 2026-03-17 00:48:37.927631 | orchestrator |  "osd_lvm_uuid": "50c44467-b3f7-539a-99b7-df2211d1583b" 2026-03-17 00:48:37.927637 | orchestrator |  }, 2026-03-17 00:48:37.927642 | 
orchestrator |      "sdc": { 2026-03-17 00:48:37.927648 | orchestrator |          "osd_lvm_uuid": "9465b490-647b-5adb-8e2e-a5649c4bc673"
2026-03-17 00:48:37.927653 | orchestrator |          }
2026-03-17 00:48:37.927659 | orchestrator |      },
2026-03-17 00:48:37.927664 | orchestrator |      "lvm_volumes": [
2026-03-17 00:48:37.927670 | orchestrator |          {
2026-03-17 00:48:37.927676 | orchestrator |              "data": "osd-block-50c44467-b3f7-539a-99b7-df2211d1583b",
2026-03-17 00:48:37.927681 | orchestrator |              "data_vg": "ceph-50c44467-b3f7-539a-99b7-df2211d1583b"
2026-03-17 00:48:37.927687 | orchestrator |          },
2026-03-17 00:48:37.927695 | orchestrator |          {
2026-03-17 00:48:37.927703 | orchestrator |              "data": "osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673",
2026-03-17 00:48:37.927711 | orchestrator |              "data_vg": "ceph-9465b490-647b-5adb-8e2e-a5649c4bc673"
2026-03-17 00:48:37.927720 | orchestrator |          }
2026-03-17 00:48:37.927729 | orchestrator |      ]
2026-03-17 00:48:37.927737 | orchestrator |  }
2026-03-17 00:48:37.927744 | orchestrator | }
2026-03-17 00:48:37.927752 | orchestrator |
2026-03-17 00:48:37.927760 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-17 00:48:37.927768 | orchestrator | Tuesday 17 March 2026 00:48:37 +0000 (0:00:00.182) 0:00:36.941 *********
2026-03-17 00:48:37.927775 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-17 00:48:37.927782 | orchestrator |
2026-03-17 00:48:37.927790 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:48:37.927799 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 00:48:37.927808 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 00:48:37.927816 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-17 00:48:37.927823 | orchestrator |
2026-03-17 00:48:37.927830 | orchestrator |
2026-03-17 00:48:37.927839 | orchestrator |
2026-03-17 00:48:37.927847 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:48:37.927853 | orchestrator | Tuesday 17 March 2026 00:48:37 +0000 (0:00:00.880) 0:00:37.822 *********
2026-03-17 00:48:37.927863 | orchestrator | ===============================================================================
2026-03-17 00:48:37.927868 | orchestrator | Write configuration file ------------------------------------------------ 4.07s
2026-03-17 00:48:37.927873 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s
2026-03-17 00:48:37.927882 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-03-17 00:48:37.927887 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s
2026-03-17 00:48:37.927892 | orchestrator | Get initial list of available block devices ----------------------------- 0.92s
2026-03-17 00:48:37.927897 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s
2026-03-17 00:48:37.927902 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s
2026-03-17 00:48:37.927907 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2026-03-17 00:48:37.927912 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.70s
2026-03-17 00:48:37.927916 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-03-17 00:48:37.927921 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s
2026-03-17 00:48:37.927926 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.57s
2026-03-17 00:48:37.927931 | orchestrator | Set WAL devices config data --------------------------------------------- 0.56s
2026-03-17 00:48:37.927942 | orchestrator | Print configuration data ------------------------------------------------ 0.55s
2026-03-17 00:48:38.160782 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s
2026-03-17 00:48:38.160890 | orchestrator | Add known partitions to the list of available block devices ------------- 0.53s
2026-03-17 00:48:38.160906 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s
2026-03-17 00:48:38.160918 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s
2026-03-17 00:48:38.160929 | orchestrator | Print DB devices -------------------------------------------------------- 0.51s
2026-03-17 00:48:38.160957 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.49s
2026-03-17 00:48:59.714754 | orchestrator | 2026-03-17 00:48:59 | INFO  | Task 75966a71-95e6-493d-87e2-25f9c8dded85 (sync inventory) is running in background. Output coming soon.
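The "Print configuration data" output above shows each entry of `ceph_osd_devices` being expanded into one element of `lvm_volumes`: the device's `osd_lvm_uuid` names both the logical volume (`osd-block-<uuid>`) and its volume group (`ceph-<uuid>`). A minimal sketch of that mapping (illustrative Python, not the OSISM playbook itself; the data is copied from the testbed-node-5 output above):

```python
# Sketch of the ceph_osd_devices -> lvm_volumes mapping seen in the
# "Print configuration data" task output (illustration only, not the
# playbook's actual Jinja2 logic).
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "50c44467-b3f7-539a-99b7-df2211d1583b"},
    "sdc": {"osd_lvm_uuid": "9465b490-647b-5adb-8e2e-a5649c4bc673"},
}

# Each OSD's UUID names both the LV ("osd-block-<uuid>") and the VG
# ("ceph-<uuid>") in the block-only layout (no separate DB/WAL devices).
lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]

print(lvm_volumes[0]["data_vg"])  # ceph-50c44467-b3f7-539a-99b7-df2211d1583b
```

This block-only form matches the run above, where the DB, WAL, and DB+WAL variants of the "Generate lvm_volumes structure" task were all skipped.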
2026-03-17 00:49:27.572985 | orchestrator | 2026-03-17 00:49:01 | INFO  | Starting group_vars file reorganization
2026-03-17 00:49:27.573066 | orchestrator | 2026-03-17 00:49:01 | INFO  | Moved 0 file(s) to their respective directories
2026-03-17 00:49:27.573073 | orchestrator | 2026-03-17 00:49:01 | INFO  | Group_vars file reorganization completed
2026-03-17 00:49:27.573077 | orchestrator | 2026-03-17 00:49:03 | INFO  | Starting variable preparation from inventory
2026-03-17 00:49:27.573082 | orchestrator | 2026-03-17 00:49:06 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-17 00:49:27.573087 | orchestrator | 2026-03-17 00:49:06 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-17 00:49:27.573110 | orchestrator | 2026-03-17 00:49:06 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-17 00:49:27.573117 | orchestrator | 2026-03-17 00:49:06 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-17 00:49:27.573123 | orchestrator | 2026-03-17 00:49:06 | INFO  | Variable preparation completed
2026-03-17 00:49:27.573129 | orchestrator | 2026-03-17 00:49:07 | INFO  | Starting inventory overwrite handling
2026-03-17 00:49:27.573135 | orchestrator | 2026-03-17 00:49:07 | INFO  | Handling group overwrites in 99-overwrite
2026-03-17 00:49:27.573141 | orchestrator | 2026-03-17 00:49:07 | INFO  | Removing group frr:children from 60-generic
2026-03-17 00:49:27.573166 | orchestrator | 2026-03-17 00:49:07 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-17 00:49:27.573173 | orchestrator | 2026-03-17 00:49:07 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-17 00:49:27.573180 | orchestrator | 2026-03-17 00:49:07 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-17 00:49:27.573187 | orchestrator | 2026-03-17 00:49:07 | INFO  | Handling group overwrites in 20-roles
2026-03-17 00:49:27.573193 | orchestrator | 2026-03-17 00:49:07 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-17 00:49:27.573199 | orchestrator | 2026-03-17 00:49:07 | INFO  | Removed 5 group(s) in total
2026-03-17 00:49:27.573205 | orchestrator | 2026-03-17 00:49:07 | INFO  | Inventory overwrite handling completed
2026-03-17 00:49:27.573211 | orchestrator | 2026-03-17 00:49:09 | INFO  | Starting merge of inventory files
2026-03-17 00:49:27.573218 | orchestrator | 2026-03-17 00:49:09 | INFO  | Inventory files merged successfully
2026-03-17 00:49:27.573225 | orchestrator | 2026-03-17 00:49:13 | INFO  | Generating minified hosts file
2026-03-17 00:49:27.573231 | orchestrator | 2026-03-17 00:49:14 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-03-17 00:49:27.573236 | orchestrator | 2026-03-17 00:49:14 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-03-17 00:49:27.573240 | orchestrator | 2026-03-17 00:49:15 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-17 00:49:27.573244 | orchestrator | 2026-03-17 00:49:26 | INFO  | Successfully wrote ClusterShell configuration
2026-03-17 00:49:27.573248 | orchestrator | [master 416797e] 2026-03-17-00-49
2026-03-17 00:49:27.573253 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-03-17 00:49:27.573258 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-03-17 00:49:27.573262 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-03-17 00:49:27.573266 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-03-17 00:49:28.798782 | orchestrator | 2026-03-17 00:49:28 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-17 00:49:28.856788 | orchestrator | 2026-03-17 00:49:28 | INFO  | Task 94d43817-5696-47af-a768-223131d387a9 (ceph-create-lvm-devices) was prepared for execution.
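The ceph-create-lvm-devices play that follows derives, for each entry of `ceph_osd_devices`, a `ceph-<uuid>` volume group containing an `osd-block-<uuid>` logical volume (the items visible in the "Create block VGs" / "Create block LVs" tasks). A minimal Python sketch of that naming scheme — the helper function is hypothetical, only the device names and UUIDs are taken from this log:

```python
# Hypothetical helper mirroring the VG/LV naming visible in the play output:
# each OSD device's osd_lvm_uuid yields a "ceph-<uuid>" VG and an
# "osd-block-<uuid>" LV entry, as in the lvm_volumes items logged below.
def lvm_volumes_from_osd_devices(ceph_osd_devices):
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",
            "data_vg": f"ceph-{uuid}",
        })
    return volumes

# Devices and UUIDs as reported for testbed-node-3 in this run.
osd_devices = {
    "sdb": {"osd_lvm_uuid": "16ca22cf-64f9-579d-994c-d43933026c5f"},
    "sdc": {"osd_lvm_uuid": "b13aeae0-05c6-5bfd-ada4-b68b1762c1d5"},
}
print(lvm_volumes_from_osd_devices(osd_devices))
```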
2026-03-17 00:49:28.856878 | orchestrator | 2026-03-17 00:49:28 | INFO  | It takes a moment until task 94d43817-5696-47af-a768-223131d387a9 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-17 00:49:39.405042 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-17 00:49:39.405150 | orchestrator | 2.16.14
2026-03-17 00:49:39.405165 | orchestrator |
2026-03-17 00:49:39.405174 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-17 00:49:39.405182 | orchestrator |
2026-03-17 00:49:39.405190 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-17 00:49:39.405198 | orchestrator | Tuesday 17 March 2026 00:49:32 +0000 (0:00:00.243) 0:00:00.243 *********
2026-03-17 00:49:39.405205 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-17 00:49:39.405212 | orchestrator |
2026-03-17 00:49:39.405219 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-17 00:49:39.405227 | orchestrator | Tuesday 17 March 2026 00:49:33 +0000 (0:00:00.235) 0:00:00.479 *********
2026-03-17 00:49:39.405233 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:39.405240 | orchestrator |
2026-03-17 00:49:39.405246 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405253 | orchestrator | Tuesday 17 March 2026 00:49:33 +0000 (0:00:00.211) 0:00:00.691 *********
2026-03-17 00:49:39.405280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-17 00:49:39.405287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-17 00:49:39.405294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-17 00:49:39.405300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-17 00:49:39.405307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-17 00:49:39.405313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-17 00:49:39.405319 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-17 00:49:39.405325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-17 00:49:39.405331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-17 00:49:39.405337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-17 00:49:39.405344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-17 00:49:39.405350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-17 00:49:39.405357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-17 00:49:39.405363 | orchestrator |
2026-03-17 00:49:39.405370 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405376 | orchestrator | Tuesday 17 March 2026 00:49:33 +0000 (0:00:00.325) 0:00:01.016 *********
2026-03-17 00:49:39.405383 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405389 | orchestrator |
2026-03-17 00:49:39.405395 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405401 | orchestrator | Tuesday 17 March 2026 00:49:33 +0000 (0:00:00.372) 0:00:01.389 *********
2026-03-17 00:49:39.405407 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405414 | orchestrator |
2026-03-17 00:49:39.405420 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405426 | orchestrator | Tuesday 17 March 2026 00:49:34 +0000 (0:00:00.175) 0:00:01.565 *********
2026-03-17 00:49:39.405450 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405456 | orchestrator |
2026-03-17 00:49:39.405463 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405469 | orchestrator | Tuesday 17 March 2026 00:49:34 +0000 (0:00:00.158) 0:00:01.724 *********
2026-03-17 00:49:39.405475 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405482 | orchestrator |
2026-03-17 00:49:39.405488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405495 | orchestrator | Tuesday 17 March 2026 00:49:34 +0000 (0:00:00.197) 0:00:01.921 *********
2026-03-17 00:49:39.405501 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405507 | orchestrator |
2026-03-17 00:49:39.405514 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405521 | orchestrator | Tuesday 17 March 2026 00:49:34 +0000 (0:00:00.174) 0:00:02.096 *********
2026-03-17 00:49:39.405528 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405534 | orchestrator |
2026-03-17 00:49:39.405542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405549 | orchestrator | Tuesday 17 March 2026 00:49:34 +0000 (0:00:00.174) 0:00:02.270 *********
2026-03-17 00:49:39.405556 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405563 | orchestrator |
2026-03-17 00:49:39.405570 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405577 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.170) 0:00:02.441 *********
2026-03-17 00:49:39.405582 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405598 | orchestrator |
2026-03-17 00:49:39.405603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405610 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.177) 0:00:02.619 *********
2026-03-17 00:49:39.405616 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb)
2026-03-17 00:49:39.405625 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb)
2026-03-17 00:49:39.405631 | orchestrator |
2026-03-17 00:49:39.405636 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405658 | orchestrator | Tuesday 17 March 2026 00:49:35 +0000 (0:00:00.392) 0:00:03.012 *********
2026-03-17 00:49:39.405665 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1)
2026-03-17 00:49:39.405671 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1)
2026-03-17 00:49:39.405677 | orchestrator |
2026-03-17 00:49:39.405682 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405688 | orchestrator | Tuesday 17 March 2026 00:49:36 +0000 (0:00:00.390) 0:00:03.403 *********
2026-03-17 00:49:39.405694 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184)
2026-03-17 00:49:39.405700 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184)
2026-03-17 00:49:39.405706 | orchestrator |
2026-03-17 00:49:39.405712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405719 | orchestrator | Tuesday 17 March 2026 00:49:36 +0000 (0:00:00.537) 0:00:03.940 *********
2026-03-17 00:49:39.405755 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d)
2026-03-17 00:49:39.405761 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d)
2026-03-17 00:49:39.405768 | orchestrator |
2026-03-17 00:49:39.405774 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:49:39.405779 | orchestrator | Tuesday 17 March 2026 00:49:37 +0000 (0:00:00.557) 0:00:04.498 *********
2026-03-17 00:49:39.405785 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-17 00:49:39.405791 | orchestrator |
2026-03-17 00:49:39.405797 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:39.405811 | orchestrator | Tuesday 17 March 2026 00:49:37 +0000 (0:00:00.605) 0:00:05.104 *********
2026-03-17 00:49:39.405817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-17 00:49:39.405824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-17 00:49:39.405829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-17 00:49:39.405835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-17 00:49:39.405841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-17 00:49:39.405846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-17 00:49:39.405853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-17 00:49:39.405859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-17 00:49:39.405864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-17 00:49:39.405871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-17 00:49:39.405877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-17 00:49:39.405883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-17 00:49:39.405899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-17 00:49:39.405906 | orchestrator |
2026-03-17 00:49:39.405912 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:39.405919 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:00.397) 0:00:05.501 *********
2026-03-17 00:49:39.405926 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405933 | orchestrator |
2026-03-17 00:49:39.405939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:39.405947 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:00.196) 0:00:05.698 *********
2026-03-17 00:49:39.405954 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405960 | orchestrator |
2026-03-17 00:49:39.405966 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:39.405974 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:00.162) 0:00:05.861 *********
2026-03-17 00:49:39.405980 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.405986 | orchestrator |
2026-03-17 00:49:39.405992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:39.405998 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:00.184) 0:00:06.046 *********
2026-03-17 00:49:39.406005 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.406078 | orchestrator |
2026-03-17 00:49:39.406088 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:39.406094 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:00.181) 0:00:06.227 *********
2026-03-17 00:49:39.406099 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.406105 | orchestrator |
2026-03-17 00:49:39.406111 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:39.406118 | orchestrator | Tuesday 17 March 2026 00:49:39 +0000 (0:00:00.193) 0:00:06.420 *********
2026-03-17 00:49:39.406126 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.406133 | orchestrator |
2026-03-17 00:49:39.406141 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:39.406149 | orchestrator | Tuesday 17 March 2026 00:49:39 +0000 (0:00:00.194) 0:00:06.615 *********
2026-03-17 00:49:39.406155 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:39.406163 | orchestrator |
2026-03-17 00:49:39.406182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:46.771462 | orchestrator | Tuesday 17 March 2026 00:49:39 +0000 (0:00:00.180) 0:00:06.796 *********
2026-03-17 00:49:46.771519 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771528 | orchestrator |
2026-03-17 00:49:46.771534 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:46.771539 | orchestrator | Tuesday 17 March 2026 00:49:39 +0000 (0:00:00.195) 0:00:06.991 *********
2026-03-17 00:49:46.771544 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-17 00:49:46.771549 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-17 00:49:46.771555 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-17 00:49:46.771560 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-17 00:49:46.771566 | orchestrator |
2026-03-17 00:49:46.771571 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:46.771577 | orchestrator | Tuesday 17 March 2026 00:49:40 +0000 (0:00:00.931) 0:00:07.923 *********
2026-03-17 00:49:46.771582 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771587 | orchestrator |
2026-03-17 00:49:46.771593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:46.771599 | orchestrator | Tuesday 17 March 2026 00:49:40 +0000 (0:00:00.221) 0:00:08.144 *********
2026-03-17 00:49:46.771603 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771606 | orchestrator |
2026-03-17 00:49:46.771609 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:46.771624 | orchestrator | Tuesday 17 March 2026 00:49:40 +0000 (0:00:00.208) 0:00:08.353 *********
2026-03-17 00:49:46.771627 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771631 | orchestrator |
2026-03-17 00:49:46.771634 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:49:46.771637 | orchestrator | Tuesday 17 March 2026 00:49:41 +0000 (0:00:00.225) 0:00:08.578 *********
2026-03-17 00:49:46.771640 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771643 | orchestrator |
2026-03-17 00:49:46.771646 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-17 00:49:46.771650 | orchestrator | Tuesday 17 March 2026 00:49:41 +0000 (0:00:00.212) 0:00:08.790 *********
2026-03-17 00:49:46.771653 | orchestrator | skipping: [testbed-node-3] 2026-03-17
00:49:46.771656 | orchestrator |
2026-03-17 00:49:46.771659 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-17 00:49:46.771662 | orchestrator | Tuesday 17 March 2026 00:49:41 +0000 (0:00:00.119) 0:00:08.910 *********
2026-03-17 00:49:46.771665 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '16ca22cf-64f9-579d-994c-d43933026c5f'}})
2026-03-17 00:49:46.771669 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'}})
2026-03-17 00:49:46.771672 | orchestrator |
2026-03-17 00:49:46.771675 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-17 00:49:46.771678 | orchestrator | Tuesday 17 March 2026 00:49:41 +0000 (0:00:00.182) 0:00:09.092 *********
2026-03-17 00:49:46.771682 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.771686 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.771689 | orchestrator |
2026-03-17 00:49:46.771699 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-17 00:49:46.771702 | orchestrator | Tuesday 17 March 2026 00:49:43 +0000 (0:00:01.705) 0:00:10.798 *********
2026-03-17 00:49:46.771706 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.771716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.771720 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771723 | orchestrator |
2026-03-17 00:49:46.771726 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-17 00:49:46.771729 | orchestrator | Tuesday 17 March 2026 00:49:43 +0000 (0:00:00.154) 0:00:10.953 *********
2026-03-17 00:49:46.771732 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.771735 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.771775 | orchestrator |
2026-03-17 00:49:46.771779 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-17 00:49:46.771782 | orchestrator | Tuesday 17 March 2026 00:49:44 +0000 (0:00:01.321) 0:00:12.274 *********
2026-03-17 00:49:46.771785 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.771788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.771791 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771795 | orchestrator |
2026-03-17 00:49:46.771798 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-17 00:49:46.771804 | orchestrator | Tuesday 17 March 2026 00:49:45 +0000 (0:00:00.144) 0:00:12.422 *********
2026-03-17 00:49:46.771821 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771828 | orchestrator |
2026-03-17 00:49:46.771834 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-17 00:49:46.771838 | orchestrator | Tuesday 17 March 2026 00:49:45 +0000 (0:00:00.144) 0:00:12.566 *********
2026-03-17 00:49:46.771844 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.771849 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.771854 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771859 | orchestrator |
2026-03-17 00:49:46.771865 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-17 00:49:46.771870 | orchestrator | Tuesday 17 March 2026 00:49:45 +0000 (0:00:00.318) 0:00:12.885 *********
2026-03-17 00:49:46.771875 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771880 | orchestrator |
2026-03-17 00:49:46.771885 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-17 00:49:46.771891 | orchestrator | Tuesday 17 March 2026 00:49:45 +0000 (0:00:00.128) 0:00:13.013 *********
2026-03-17 00:49:46.771896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.771901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.771906 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771911 | orchestrator |
2026-03-17 00:49:46.771920 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-17 00:49:46.771925 | orchestrator | Tuesday 17 March 2026 00:49:45 +0000 (0:00:00.147) 0:00:13.160 *********
2026-03-17 00:49:46.771930 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771935 | orchestrator |
2026-03-17 00:49:46.771940 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-17 00:49:46.771945 | orchestrator | Tuesday 17 March 2026 00:49:45 +0000 (0:00:00.140) 0:00:13.301 *********
2026-03-17 00:49:46.771951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.771956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.771961 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.771966 | orchestrator |
2026-03-17 00:49:46.771971 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-17 00:49:46.771976 | orchestrator | Tuesday 17 March 2026 00:49:46 +0000 (0:00:00.151) 0:00:13.453 *********
2026-03-17 00:49:46.771982 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:46.771987 | orchestrator |
2026-03-17 00:49:46.771992 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-17 00:49:46.771998 | orchestrator | Tuesday 17 March 2026 00:49:46 +0000 (0:00:00.143) 0:00:13.596 *********
2026-03-17 00:49:46.772003 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.772008 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.772013 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.772018 | orchestrator |
2026-03-17 00:49:46.772024 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-17 00:49:46.772033 | orchestrator | Tuesday 17 March 2026 00:49:46 +0000 (0:00:00.143) 0:00:13.739 *********
2026-03-17 00:49:46.772038 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.772043 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.772048 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.772054 | orchestrator |
2026-03-17 00:49:46.772059 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-17 00:49:46.772065 | orchestrator | Tuesday 17 March 2026 00:49:46 +0000 (0:00:00.136) 0:00:13.876 *********
2026-03-17 00:49:46.772070 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})
2026-03-17 00:49:46.772076 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})
2026-03-17 00:49:46.772082 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.772087 | orchestrator |
2026-03-17 00:49:46.772092 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-17 00:49:46.772102 | orchestrator | Tuesday 17 March 2026 00:49:46 +0000 (0:00:00.154) 0:00:14.031 *********
2026-03-17 00:49:46.772107 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:46.772113 | orchestrator |
2026-03-17 00:49:46.772118 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-17 00:49:46.772128 | orchestrator | Tuesday 17 March 2026 00:49:46 +0000
(0:00:00.132) 0:00:14.163 *********
2026-03-17 00:49:52.831072 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831134 | orchestrator |
2026-03-17 00:49:52.831143 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-17 00:49:52.831149 | orchestrator | Tuesday 17 March 2026 00:49:46 +0000 (0:00:00.134) 0:00:14.298 *********
2026-03-17 00:49:52.831154 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831159 | orchestrator |
2026-03-17 00:49:52.831165 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-17 00:49:52.831170 | orchestrator | Tuesday 17 March 2026 00:49:47 +0000 (0:00:00.111) 0:00:14.409 *********
2026-03-17 00:49:52.831176 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 00:49:52.831182 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-17 00:49:52.831188 | orchestrator | }
2026-03-17 00:49:52.831195 | orchestrator |
2026-03-17 00:49:52.831202 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-17 00:49:52.831207 | orchestrator | Tuesday 17 March 2026 00:49:47 +0000 (0:00:00.301) 0:00:14.711 *********
2026-03-17 00:49:52.831212 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 00:49:52.831217 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-17 00:49:52.831223 | orchestrator | }
2026-03-17 00:49:52.831228 | orchestrator |
2026-03-17 00:49:52.831233 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-17 00:49:52.831238 | orchestrator | Tuesday 17 March 2026 00:49:47 +0000 (0:00:00.136) 0:00:14.847 *********
2026-03-17 00:49:52.831244 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 00:49:52.831249 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-17 00:49:52.831255 | orchestrator | }
2026-03-17 00:49:52.831261 | orchestrator |
2026-03-17 00:49:52.831266 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-17 00:49:52.831272 | orchestrator | Tuesday 17 March 2026 00:49:47 +0000 (0:00:00.127) 0:00:14.975 *********
2026-03-17 00:49:52.831277 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:52.831283 | orchestrator |
2026-03-17 00:49:52.831288 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-17 00:49:52.831293 | orchestrator | Tuesday 17 March 2026 00:49:48 +0000 (0:00:00.641) 0:00:15.616 *********
2026-03-17 00:49:52.831312 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:52.831318 | orchestrator |
2026-03-17 00:49:52.831322 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-17 00:49:52.831325 | orchestrator | Tuesday 17 March 2026 00:49:48 +0000 (0:00:00.560) 0:00:16.177 *********
2026-03-17 00:49:52.831328 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:52.831331 | orchestrator |
2026-03-17 00:49:52.831334 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-17 00:49:52.831338 | orchestrator | Tuesday 17 March 2026 00:49:49 +0000 (0:00:00.574) 0:00:16.751 *********
2026-03-17 00:49:52.831341 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:49:52.831344 | orchestrator |
2026-03-17 00:49:52.831347 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-17 00:49:52.831350 | orchestrator | Tuesday 17 March 2026 00:49:49 +0000 (0:00:00.115) 0:00:16.867 *********
2026-03-17 00:49:52.831353 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831356 | orchestrator |
2026-03-17 00:49:52.831360 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-17 00:49:52.831363 | orchestrator | Tuesday 17 March 2026 00:49:49 +0000 (0:00:00.109) 0:00:16.977 *********
2026-03-17 00:49:52.831366 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831369 | orchestrator |
2026-03-17 00:49:52.831372 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-17 00:49:52.831375 | orchestrator | Tuesday 17 March 2026 00:49:49 +0000 (0:00:00.099) 0:00:17.076 *********
2026-03-17 00:49:52.831379 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 00:49:52.831382 | orchestrator |  "vgs_report": {
2026-03-17 00:49:52.831385 | orchestrator |  "vg": []
2026-03-17 00:49:52.831389 | orchestrator |  }
2026-03-17 00:49:52.831392 | orchestrator | }
2026-03-17 00:49:52.831395 | orchestrator |
2026-03-17 00:49:52.831398 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-17 00:49:52.831401 | orchestrator | Tuesday 17 March 2026 00:49:49 +0000 (0:00:00.134) 0:00:17.211 *********
2026-03-17 00:49:52.831404 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831408 | orchestrator |
2026-03-17 00:49:52.831411 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-17 00:49:52.831414 | orchestrator | Tuesday 17 March 2026 00:49:49 +0000 (0:00:00.134) 0:00:17.346 *********
2026-03-17 00:49:52.831418 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831421 | orchestrator |
2026-03-17 00:49:52.831424 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-17 00:49:52.831427 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:00.123) 0:00:17.469 *********
2026-03-17 00:49:52.831452 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831455 | orchestrator |
2026-03-17 00:49:52.831459 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-17 00:49:52.831462 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:00.124) 0:00:17.594 *********
2026-03-17 00:49:52.831465 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831468 | orchestrator |
2026-03-17 00:49:52.831472 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-17 00:49:52.831475 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:00.285) 0:00:17.879 *********
2026-03-17 00:49:52.831478 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831481 | orchestrator |
2026-03-17 00:49:52.831485 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-17 00:49:52.831488 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:00.127) 0:00:18.007 *********
2026-03-17 00:49:52.831491 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831494 | orchestrator |
2026-03-17 00:49:52.831497 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-17 00:49:52.831501 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:00.119) 0:00:18.126 *********
2026-03-17 00:49:52.831504 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831511 | orchestrator |
2026-03-17 00:49:52.831514 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-17 00:49:52.831517 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:00.130) 0:00:18.257 *********
2026-03-17 00:49:52.831530 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831534 | orchestrator |
2026-03-17 00:49:52.831545 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-17 00:49:52.831549 | orchestrator | Tuesday 17 March 2026 00:49:50 +0000 (0:00:00.120) 0:00:18.377 *********
2026-03-17 00:49:52.831552 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:49:52.831555 | orchestrator |
2026-03-17 00:49:52.831558 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-17 00:49:52.831562 | orchestrator | Tuesday 17 March 2026 00:49:51 +0000 (0:00:00.123) 0:00:18.500 ********* 2026-03-17 00:49:52.831565 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831568 | orchestrator | 2026-03-17 00:49:52.831571 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-17 00:49:52.831575 | orchestrator | Tuesday 17 March 2026 00:49:51 +0000 (0:00:00.126) 0:00:18.627 ********* 2026-03-17 00:49:52.831578 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831581 | orchestrator | 2026-03-17 00:49:52.831584 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-17 00:49:52.831588 | orchestrator | Tuesday 17 March 2026 00:49:51 +0000 (0:00:00.134) 0:00:18.762 ********* 2026-03-17 00:49:52.831591 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831594 | orchestrator | 2026-03-17 00:49:52.831597 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-17 00:49:52.831600 | orchestrator | Tuesday 17 March 2026 00:49:51 +0000 (0:00:00.144) 0:00:18.906 ********* 2026-03-17 00:49:52.831604 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831607 | orchestrator | 2026-03-17 00:49:52.831610 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-17 00:49:52.831616 | orchestrator | Tuesday 17 March 2026 00:49:51 +0000 (0:00:00.134) 0:00:19.041 ********* 2026-03-17 00:49:52.831621 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831630 | orchestrator | 2026-03-17 00:49:52.831638 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-17 00:49:52.831643 | orchestrator | Tuesday 17 March 2026 00:49:51 +0000 (0:00:00.158) 0:00:19.199 ********* 2026-03-17 00:49:52.831648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:52.831654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:52.831659 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831664 | orchestrator | 2026-03-17 00:49:52.831669 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-17 00:49:52.831674 | orchestrator | Tuesday 17 March 2026 00:49:51 +0000 (0:00:00.176) 0:00:19.375 ********* 2026-03-17 00:49:52.831679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:52.831685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:52.831691 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831697 | orchestrator | 2026-03-17 00:49:52.831702 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-17 00:49:52.831708 | orchestrator | Tuesday 17 March 2026 00:49:52 +0000 (0:00:00.294) 0:00:19.670 ********* 2026-03-17 00:49:52.831713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:52.831718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:52.831725 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831729 | orchestrator | 2026-03-17 00:49:52.831734 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-17 00:49:52.831739 | orchestrator | Tuesday 17 March 2026 00:49:52 +0000 (0:00:00.154) 0:00:19.825 ********* 2026-03-17 00:49:52.831745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:52.831750 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:52.831796 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831800 | orchestrator | 2026-03-17 00:49:52.831804 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-17 00:49:52.831809 | orchestrator | Tuesday 17 March 2026 00:49:52 +0000 (0:00:00.151) 0:00:19.976 ********* 2026-03-17 00:49:52.831815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:52.831821 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:52.831826 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:52.831832 | orchestrator | 2026-03-17 00:49:52.831837 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-17 00:49:52.831843 | orchestrator | Tuesday 17 March 2026 00:49:52 +0000 (0:00:00.175) 0:00:20.152 ********* 2026-03-17 00:49:52.831854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:58.771080 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:58.771131 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:58.771151 | orchestrator | 2026-03-17 00:49:58.771159 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-17 00:49:58.771166 | orchestrator | Tuesday 17 March 2026 00:49:52 +0000 (0:00:00.165) 0:00:20.318 ********* 2026-03-17 00:49:58.771181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:58.771188 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:58.771195 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:58.771199 | orchestrator | 2026-03-17 00:49:58.771203 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-17 00:49:58.771206 | orchestrator | Tuesday 17 March 2026 00:49:53 +0000 (0:00:00.159) 0:00:20.477 ********* 2026-03-17 00:49:58.771216 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:58.771229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:58.771233 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:58.771236 | orchestrator | 2026-03-17 00:49:58.771240 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-17 00:49:58.771249 | orchestrator | Tuesday 17 March 2026 00:49:53 +0000 (0:00:00.161) 0:00:20.638 ********* 2026-03-17 00:49:58.771253 | 
orchestrator | ok: [testbed-node-3] 2026-03-17 00:49:58.771257 | orchestrator | 2026-03-17 00:49:58.771278 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-17 00:49:58.771282 | orchestrator | Tuesday 17 March 2026 00:49:53 +0000 (0:00:00.547) 0:00:21.185 ********* 2026-03-17 00:49:58.771286 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:49:58.771290 | orchestrator | 2026-03-17 00:49:58.771293 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-17 00:49:58.771297 | orchestrator | Tuesday 17 March 2026 00:49:54 +0000 (0:00:00.498) 0:00:21.684 ********* 2026-03-17 00:49:58.771301 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:49:58.771304 | orchestrator | 2026-03-17 00:49:58.771308 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-17 00:49:58.771312 | orchestrator | Tuesday 17 March 2026 00:49:54 +0000 (0:00:00.176) 0:00:21.860 ********* 2026-03-17 00:49:58.771316 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'vg_name': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'}) 2026-03-17 00:49:58.771320 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'vg_name': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'}) 2026-03-17 00:49:58.771324 | orchestrator | 2026-03-17 00:49:58.771328 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-17 00:49:58.771336 | orchestrator | Tuesday 17 March 2026 00:49:54 +0000 (0:00:00.212) 0:00:22.072 ********* 2026-03-17 00:49:58.771340 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:58.771344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:58.771348 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:58.771352 | orchestrator | 2026-03-17 00:49:58.771355 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-17 00:49:58.771359 | orchestrator | Tuesday 17 March 2026 00:49:54 +0000 (0:00:00.174) 0:00:22.246 ********* 2026-03-17 00:49:58.771363 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:58.771366 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:58.771370 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:58.771374 | orchestrator | 2026-03-17 00:49:58.771377 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-17 00:49:58.771381 | orchestrator | Tuesday 17 March 2026 00:49:55 +0000 (0:00:00.379) 0:00:22.626 ********* 2026-03-17 00:49:58.771385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'})  2026-03-17 00:49:58.771393 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'})  2026-03-17 00:49:58.771397 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:49:58.771401 | orchestrator | 2026-03-17 00:49:58.771407 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-17 00:49:58.771413 | orchestrator | Tuesday 17 March 2026 00:49:55 +0000 (0:00:00.194) 0:00:22.821 ********* 2026-03-17 00:49:58.771437 | 
orchestrator | ok: [testbed-node-3] => { 2026-03-17 00:49:58.771441 | orchestrator |  "lvm_report": { 2026-03-17 00:49:58.771445 | orchestrator |  "lv": [ 2026-03-17 00:49:58.771449 | orchestrator |  { 2026-03-17 00:49:58.771455 | orchestrator |  "lv_name": "osd-block-16ca22cf-64f9-579d-994c-d43933026c5f", 2026-03-17 00:49:58.771469 | orchestrator |  "vg_name": "ceph-16ca22cf-64f9-579d-994c-d43933026c5f" 2026-03-17 00:49:58.771476 | orchestrator |  }, 2026-03-17 00:49:58.771488 | orchestrator |  { 2026-03-17 00:49:58.771495 | orchestrator |  "lv_name": "osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5", 2026-03-17 00:49:58.771509 | orchestrator |  "vg_name": "ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5" 2026-03-17 00:49:58.771516 | orchestrator |  } 2026-03-17 00:49:58.771522 | orchestrator |  ], 2026-03-17 00:49:58.771528 | orchestrator |  "pv": [ 2026-03-17 00:49:58.771535 | orchestrator |  { 2026-03-17 00:49:58.771547 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-17 00:49:58.771552 | orchestrator |  "vg_name": "ceph-16ca22cf-64f9-579d-994c-d43933026c5f" 2026-03-17 00:49:58.771559 | orchestrator |  }, 2026-03-17 00:49:58.771572 | orchestrator |  { 2026-03-17 00:49:58.771578 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-17 00:49:58.771585 | orchestrator |  "vg_name": "ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5" 2026-03-17 00:49:58.771590 | orchestrator |  } 2026-03-17 00:49:58.771593 | orchestrator |  ] 2026-03-17 00:49:58.771597 | orchestrator |  } 2026-03-17 00:49:58.771600 | orchestrator | } 2026-03-17 00:49:58.771604 | orchestrator | 2026-03-17 00:49:58.771608 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-17 00:49:58.771611 | orchestrator | 2026-03-17 00:49:58.771615 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-17 00:49:58.771619 | orchestrator | Tuesday 17 March 2026 00:49:55 +0000 (0:00:00.329) 0:00:23.150 ********* 2026-03-17 
00:49:58.771623 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-17 00:49:58.771626 | orchestrator | 2026-03-17 00:49:58.771630 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-17 00:49:58.771633 | orchestrator | Tuesday 17 March 2026 00:49:56 +0000 (0:00:00.270) 0:00:23.421 ********* 2026-03-17 00:49:58.771637 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:49:58.771641 | orchestrator | 2026-03-17 00:49:58.771644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:49:58.771648 | orchestrator | Tuesday 17 March 2026 00:49:56 +0000 (0:00:00.242) 0:00:23.663 ********* 2026-03-17 00:49:58.771652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-17 00:49:58.771656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:49:58.771659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:49:58.771663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:49:58.771666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-17 00:49:58.771670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:49:58.771674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:49:58.771678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:49:58.771682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-17 00:49:58.771690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-17 
00:49:58.771694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:49:58.771699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:49:58.771703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:49:58.771707 | orchestrator | 2026-03-17 00:49:58.771711 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:49:58.771715 | orchestrator | Tuesday 17 March 2026 00:49:56 +0000 (0:00:00.508) 0:00:24.171 ********* 2026-03-17 00:49:58.771719 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:49:58.771727 | orchestrator | 2026-03-17 00:49:58.771731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:49:58.771736 | orchestrator | Tuesday 17 March 2026 00:49:57 +0000 (0:00:00.253) 0:00:24.425 ********* 2026-03-17 00:49:58.771740 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:49:58.771744 | orchestrator | 2026-03-17 00:49:58.771748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:49:58.771752 | orchestrator | Tuesday 17 March 2026 00:49:57 +0000 (0:00:00.220) 0:00:24.646 ********* 2026-03-17 00:49:58.771756 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:49:58.771760 | orchestrator | 2026-03-17 00:49:58.771794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:49:58.771799 | orchestrator | Tuesday 17 March 2026 00:49:57 +0000 (0:00:00.194) 0:00:24.840 ********* 2026-03-17 00:49:58.771803 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:49:58.771807 | orchestrator | 2026-03-17 00:49:58.771812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:49:58.771816 | orchestrator 
| Tuesday 17 March 2026 00:49:58 +0000 (0:00:00.808) 0:00:25.649 ********* 2026-03-17 00:49:58.771820 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:49:58.771824 | orchestrator | 2026-03-17 00:49:58.771828 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:49:58.771832 | orchestrator | Tuesday 17 March 2026 00:49:58 +0000 (0:00:00.269) 0:00:25.919 ********* 2026-03-17 00:49:58.771836 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:49:58.771840 | orchestrator | 2026-03-17 00:49:58.771854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:50:10.244541 | orchestrator | Tuesday 17 March 2026 00:49:58 +0000 (0:00:00.243) 0:00:26.162 ********* 2026-03-17 00:50:10.244603 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:50:10.244609 | orchestrator | 2026-03-17 00:50:10.244614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:50:10.244618 | orchestrator | Tuesday 17 March 2026 00:49:59 +0000 (0:00:00.239) 0:00:26.402 ********* 2026-03-17 00:50:10.244622 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:50:10.244626 | orchestrator | 2026-03-17 00:50:10.244630 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:50:10.244634 | orchestrator | Tuesday 17 March 2026 00:49:59 +0000 (0:00:00.233) 0:00:26.635 ********* 2026-03-17 00:50:10.244638 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88) 2026-03-17 00:50:10.244643 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88) 2026-03-17 00:50:10.244646 | orchestrator | 2026-03-17 00:50:10.244650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:50:10.244654 | orchestrator | Tuesday 
17 March 2026 00:49:59 +0000 (0:00:00.467) 0:00:27.102 ********* 2026-03-17 00:50:10.244658 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235) 2026-03-17 00:50:10.244662 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235) 2026-03-17 00:50:10.244666 | orchestrator | 2026-03-17 00:50:10.244682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:50:10.244686 | orchestrator | Tuesday 17 March 2026 00:50:00 +0000 (0:00:00.559) 0:00:27.662 ********* 2026-03-17 00:50:10.244690 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32) 2026-03-17 00:50:10.244694 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32) 2026-03-17 00:50:10.244698 | orchestrator | 2026-03-17 00:50:10.244702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:50:10.244706 | orchestrator | Tuesday 17 March 2026 00:50:00 +0000 (0:00:00.483) 0:00:28.145 ********* 2026-03-17 00:50:10.244709 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b) 2026-03-17 00:50:10.244724 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b) 2026-03-17 00:50:10.244728 | orchestrator | 2026-03-17 00:50:10.244732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:50:10.244736 | orchestrator | Tuesday 17 March 2026 00:50:01 +0000 (0:00:00.479) 0:00:28.625 ********* 2026-03-17 00:50:10.244740 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:50:10.244744 | orchestrator | 2026-03-17 00:50:10.244747 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2026-03-17 00:50:10.244751 | orchestrator | Tuesday 17 March 2026 00:50:01 +0000 (0:00:00.395) 0:00:29.020 ********* 2026-03-17 00:50:10.244755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-17 00:50:10.244759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-17 00:50:10.244763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-17 00:50:10.244767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-17 00:50:10.244771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-17 00:50:10.244775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-17 00:50:10.244778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-17 00:50:10.244783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-17 00:50:10.244816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-17 00:50:10.244824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-17 00:50:10.244829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-17 00:50:10.244833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-17 00:50:10.244837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-17 00:50:10.244841 | orchestrator | 2026-03-17 00:50:10.244845 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
2026-03-17 00:50:10.244849 | orchestrator | Tuesday 17 March 2026 00:50:02 +0000 (0:00:00.647) 0:00:29.668 ********* 2026-03-17 00:50:10.244852 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:50:10.244856 | orchestrator | 2026-03-17 00:50:10.244860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:10.244864 | orchestrator | Tuesday 17 March 2026 00:50:02 +0000 (0:00:00.233) 0:00:29.901 ********* 2026-03-17 00:50:10.244867 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:50:10.244871 | orchestrator | 2026-03-17 00:50:10.244875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:10.244879 | orchestrator | Tuesday 17 March 2026 00:50:02 +0000 (0:00:00.214) 0:00:30.116 ********* 2026-03-17 00:50:10.244883 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:50:10.244886 | orchestrator | 2026-03-17 00:50:10.244899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:10.244903 | orchestrator | Tuesday 17 March 2026 00:50:02 +0000 (0:00:00.204) 0:00:30.321 ********* 2026-03-17 00:50:10.244907 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:50:10.244913 | orchestrator | 2026-03-17 00:50:10.244919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:10.244929 | orchestrator | Tuesday 17 March 2026 00:50:03 +0000 (0:00:00.191) 0:00:30.513 ********* 2026-03-17 00:50:10.244939 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:50:10.244944 | orchestrator | 2026-03-17 00:50:10.244949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:10.244960 | orchestrator | Tuesday 17 March 2026 00:50:03 +0000 (0:00:00.300) 0:00:30.813 ********* 2026-03-17 00:50:10.244966 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:50:10.244972 
| orchestrator |
2026-03-17 00:50:10.244978 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:50:10.244983 | orchestrator | Tuesday 17 March 2026  00:50:03 +0000 (0:00:00.241) 0:00:31.055 *********
2026-03-17 00:50:10.244989 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:10.244995 | orchestrator |
2026-03-17 00:50:10.245001 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:50:10.245007 | orchestrator | Tuesday 17 March 2026  00:50:03 +0000 (0:00:00.206) 0:00:31.262 *********
2026-03-17 00:50:10.245013 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:10.245020 | orchestrator |
2026-03-17 00:50:10.245026 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:50:10.245037 | orchestrator | Tuesday 17 March 2026  00:50:04 +0000 (0:00:00.217) 0:00:31.480 *********
2026-03-17 00:50:10.245043 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-17 00:50:10.245047 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-17 00:50:10.245051 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-17 00:50:10.245054 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-17 00:50:10.245058 | orchestrator |
2026-03-17 00:50:10.245062 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:50:10.245066 | orchestrator | Tuesday 17 March 2026  00:50:04 +0000 (0:00:00.912) 0:00:32.393 *********
2026-03-17 00:50:10.245070 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:10.245074 | orchestrator |
2026-03-17 00:50:10.245077 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:50:10.245081 | orchestrator | Tuesday 17 March 2026  00:50:05 +0000 (0:00:00.232) 0:00:32.625 *********
2026-03-17 00:50:10.245085 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:10.245089 | orchestrator |
2026-03-17 00:50:10.245093 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:50:10.245096 | orchestrator | Tuesday 17 March 2026  00:50:05 +0000 (0:00:00.241) 0:00:32.867 *********
2026-03-17 00:50:10.245100 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:10.245104 | orchestrator |
2026-03-17 00:50:10.245108 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-17 00:50:10.245112 | orchestrator | Tuesday 17 March 2026  00:50:06 +0000 (0:00:00.762) 0:00:33.629 *********
2026-03-17 00:50:10.245116 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:10.245119 | orchestrator |
2026-03-17 00:50:10.245123 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-17 00:50:10.245127 | orchestrator | Tuesday 17 March 2026  00:50:06 +0000 (0:00:00.246) 0:00:33.876 *********
2026-03-17 00:50:10.245131 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:10.245134 | orchestrator |
2026-03-17 00:50:10.245138 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-17 00:50:10.245142 | orchestrator | Tuesday 17 March 2026  00:50:06 +0000 (0:00:00.151) 0:00:34.027 *********
2026-03-17 00:50:10.245146 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd77b95b6-dc37-5eed-9a6e-c7871424e120'}})
2026-03-17 00:50:10.245150 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec88a4df-1f79-596d-b281-118c477c78df'}})
2026-03-17 00:50:10.245154 | orchestrator |
2026-03-17 00:50:10.245158 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-17 00:50:10.245161 | orchestrator | Tuesday 17 March 2026  00:50:06 +0000 (0:00:00.250) 0:00:34.278 *********
2026-03-17 00:50:10.245166 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:10.245170 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:10.245177 | orchestrator |
2026-03-17 00:50:10.245181 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-17 00:50:10.245185 | orchestrator | Tuesday 17 March 2026  00:50:08 +0000 (0:00:01.990) 0:00:36.269 *********
2026-03-17 00:50:10.245189 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:10.245194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:10.245197 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:10.245201 | orchestrator |
2026-03-17 00:50:10.245205 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-17 00:50:10.245209 | orchestrator | Tuesday 17 March 2026  00:50:09 +0000 (0:00:00.195) 0:00:36.465 *********
2026-03-17 00:50:10.245213 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:10.245222 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:16.196766 | orchestrator |
2026-03-17 00:50:16.196881 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-17 00:50:16.196896 | orchestrator | Tuesday 17 March 2026  00:50:10 +0000 (0:00:01.291) 0:00:37.757 *********
2026-03-17 00:50:16.196905 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:16.196914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:16.196923 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.196932 | orchestrator |
2026-03-17 00:50:16.196940 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-17 00:50:16.196948 | orchestrator | Tuesday 17 March 2026  00:50:10 +0000 (0:00:00.162) 0:00:37.919 *********
2026-03-17 00:50:16.196956 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.196964 | orchestrator |
2026-03-17 00:50:16.196972 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-17 00:50:16.196980 | orchestrator | Tuesday 17 March 2026  00:50:10 +0000 (0:00:00.144) 0:00:38.063 *********
2026-03-17 00:50:16.196989 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:16.196997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:16.197005 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197013 | orchestrator |
2026-03-17 00:50:16.197021 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-17 00:50:16.197029 | orchestrator | Tuesday 17 March 2026  00:50:10 +0000 (0:00:00.190) 0:00:38.254 *********
2026-03-17 00:50:16.197037 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197045 | orchestrator |
2026-03-17 00:50:16.197053 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-17 00:50:16.197061 | orchestrator | Tuesday 17 March 2026  00:50:10 +0000 (0:00:00.129) 0:00:38.383 *********
2026-03-17 00:50:16.197069 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:16.197078 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:16.197100 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197108 | orchestrator |
2026-03-17 00:50:16.197116 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-17 00:50:16.197124 | orchestrator | Tuesday 17 March 2026  00:50:11 +0000 (0:00:00.191) 0:00:38.575 *********
2026-03-17 00:50:16.197132 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197140 | orchestrator |
2026-03-17 00:50:16.197159 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-17 00:50:16.197168 | orchestrator | Tuesday 17 March 2026  00:50:11 +0000 (0:00:00.376) 0:00:38.951 *********
2026-03-17 00:50:16.197176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:16.197184 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:16.197192 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197200 | orchestrator |
2026-03-17 00:50:16.197208 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-17 00:50:16.197216 | orchestrator | Tuesday 17 March 2026  00:50:11 +0000 (0:00:00.158) 0:00:39.110 *********
2026-03-17 00:50:16.197224 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:16.197232 | orchestrator |
2026-03-17 00:50:16.197240 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-17 00:50:16.197248 | orchestrator | Tuesday 17 March 2026  00:50:11 +0000 (0:00:00.142) 0:00:39.252 *********
2026-03-17 00:50:16.197256 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:16.197264 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:16.197272 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197280 | orchestrator |
2026-03-17 00:50:16.197288 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-17 00:50:16.197296 | orchestrator | Tuesday 17 March 2026  00:50:12 +0000 (0:00:00.177) 0:00:39.430 *********
2026-03-17 00:50:16.197304 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:16.197312 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:16.197320 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197328 | orchestrator |
2026-03-17 00:50:16.197337 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-17 00:50:16.197359 | orchestrator | Tuesday 17 March 2026  00:50:12 +0000 (0:00:00.162) 0:00:39.592 *********
2026-03-17 00:50:16.197369 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:16.197378 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:16.197387 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197396 | orchestrator |
2026-03-17 00:50:16.197405 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-17 00:50:16.197414 | orchestrator | Tuesday 17 March 2026  00:50:12 +0000 (0:00:00.170) 0:00:39.763 *********
2026-03-17 00:50:16.197423 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197431 | orchestrator |
2026-03-17 00:50:16.197441 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-17 00:50:16.197450 | orchestrator | Tuesday 17 March 2026  00:50:12 +0000 (0:00:00.129) 0:00:39.893 *********
2026-03-17 00:50:16.197466 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197474 | orchestrator |
2026-03-17 00:50:16.197483 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-17 00:50:16.197496 | orchestrator | Tuesday 17 March 2026  00:50:12 +0000 (0:00:00.177) 0:00:40.071 *********
2026-03-17 00:50:16.197505 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197514 | orchestrator |
2026-03-17 00:50:16.197523 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-17 00:50:16.197532 | orchestrator | Tuesday 17 March 2026  00:50:12 +0000 (0:00:00.147) 0:00:40.218 *********
2026-03-17 00:50:16.197541 | orchestrator | ok: [testbed-node-4] => {
2026-03-17 00:50:16.197550 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-17 00:50:16.197559 | orchestrator | }
2026-03-17 00:50:16.197568 | orchestrator |
2026-03-17 00:50:16.197577 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-17 00:50:16.197586 | orchestrator | Tuesday 17 March 2026  00:50:12 +0000 (0:00:00.153) 0:00:40.371 *********
2026-03-17 00:50:16.197595 | orchestrator | ok: [testbed-node-4] => {
2026-03-17 00:50:16.197604 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-17 00:50:16.197613 | orchestrator | }
2026-03-17 00:50:16.197622 | orchestrator |
2026-03-17 00:50:16.197631 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-17 00:50:16.197640 | orchestrator | Tuesday 17 March 2026  00:50:13 +0000 (0:00:00.154) 0:00:40.525 *********
2026-03-17 00:50:16.197649 | orchestrator | ok: [testbed-node-4] => {
2026-03-17 00:50:16.197658 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-17 00:50:16.197667 | orchestrator | }
2026-03-17 00:50:16.197675 | orchestrator |
2026-03-17 00:50:16.197685 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-17 00:50:16.197694 | orchestrator | Tuesday 17 March 2026  00:50:13 +0000 (0:00:00.166) 0:00:40.692 *********
2026-03-17 00:50:16.197703 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:16.197712 | orchestrator |
2026-03-17 00:50:16.197721 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-17 00:50:16.197729 | orchestrator | Tuesday 17 March 2026  00:50:14 +0000 (0:00:00.728) 0:00:41.420 *********
2026-03-17 00:50:16.197737 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:16.197745 | orchestrator |
2026-03-17 00:50:16.197753 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-17 00:50:16.197761 | orchestrator | Tuesday 17 March 2026  00:50:14 +0000 (0:00:00.506) 0:00:41.926 *********
2026-03-17 00:50:16.197769 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:16.197777 | orchestrator |
2026-03-17 00:50:16.197785 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-17 00:50:16.197793 | orchestrator | Tuesday 17 March 2026  00:50:15 +0000 (0:00:00.489) 0:00:42.416 *********
2026-03-17 00:50:16.197857 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:16.197865 | orchestrator |
2026-03-17 00:50:16.197873 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-17 00:50:16.197881 | orchestrator | Tuesday 17 March 2026  00:50:15 +0000 (0:00:00.171) 0:00:42.587 *********
2026-03-17 00:50:16.197889 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197897 | orchestrator |
2026-03-17 00:50:16.197905 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-17 00:50:16.197913 | orchestrator | Tuesday 17 March 2026  00:50:15 +0000 (0:00:00.132) 0:00:42.720 *********
2026-03-17 00:50:16.197920 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.197928 | orchestrator |
2026-03-17 00:50:16.197936 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-17 00:50:16.197944 | orchestrator | Tuesday 17 March 2026  00:50:15 +0000 (0:00:00.117) 0:00:42.838 *********
2026-03-17 00:50:16.197952 | orchestrator | ok: [testbed-node-4] => {
2026-03-17 00:50:16.197960 | orchestrator |  "vgs_report": {
2026-03-17 00:50:16.197969 | orchestrator |  "vg": []
2026-03-17 00:50:16.197977 | orchestrator |  }
2026-03-17 00:50:16.197985 | orchestrator | }
2026-03-17 00:50:16.197998 | orchestrator |
2026-03-17 00:50:16.198007 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-17 00:50:16.198094 | orchestrator | Tuesday 17 March 2026  00:50:15 +0000 (0:00:00.180) 0:00:43.018 *********
2026-03-17 00:50:16.198105 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.198113 | orchestrator |
2026-03-17 00:50:16.198121 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-17 00:50:16.198129 | orchestrator | Tuesday 17 March 2026  00:50:15 +0000 (0:00:00.139) 0:00:43.158 *********
2026-03-17 00:50:16.198137 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.198145 | orchestrator |
2026-03-17 00:50:16.198153 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-17 00:50:16.198161 | orchestrator | Tuesday 17 March 2026  00:50:15 +0000 (0:00:00.161) 0:00:43.319 *********
2026-03-17 00:50:16.198169 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.198177 | orchestrator |
2026-03-17 00:50:16.198185 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-17 00:50:16.198193 | orchestrator | Tuesday 17 March 2026  00:50:16 +0000 (0:00:00.131) 0:00:43.450 *********
2026-03-17 00:50:16.198201 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:16.198209 | orchestrator |
2026-03-17 00:50:16.198224 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-17 00:50:21.039721 | orchestrator | Tuesday 17 March 2026  00:50:16 +0000 (0:00:00.137) 0:00:43.588 *********
2026-03-17 00:50:21.039774 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.039780 | orchestrator |
2026-03-17 00:50:21.039786 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-17 00:50:21.039792 | orchestrator | Tuesday 17 March 2026  00:50:16 +0000 (0:00:00.154) 0:00:43.742 *********
2026-03-17 00:50:21.039799 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.039805 | orchestrator |
2026-03-17 00:50:21.039867 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-17 00:50:21.039874 | orchestrator | Tuesday 17 March 2026  00:50:16 +0000 (0:00:00.354) 0:00:44.097 *********
2026-03-17 00:50:21.039881 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.039888 | orchestrator |
2026-03-17 00:50:21.039895 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-17 00:50:21.039901 | orchestrator | Tuesday 17 March 2026  00:50:16 +0000 (0:00:00.163) 0:00:44.261 *********
2026-03-17 00:50:21.039908 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.039914 | orchestrator |
2026-03-17 00:50:21.039920 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-17 00:50:21.039927 | orchestrator | Tuesday 17 March 2026  00:50:16 +0000 (0:00:00.133) 0:00:44.395 *********
2026-03-17 00:50:21.039943 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.039950 | orchestrator |
2026-03-17 00:50:21.039956 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-17 00:50:21.039963 | orchestrator | Tuesday 17 March 2026  00:50:17 +0000 (0:00:00.122) 0:00:44.518 *********
2026-03-17 00:50:21.039969 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.039975 | orchestrator |
2026-03-17 00:50:21.039982 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-17 00:50:21.039988 | orchestrator | Tuesday 17 March 2026  00:50:17 +0000 (0:00:00.138) 0:00:44.656 *********
2026-03-17 00:50:21.039995 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040002 | orchestrator |
2026-03-17 00:50:21.040009 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-17 00:50:21.040016 | orchestrator | Tuesday 17 March 2026  00:50:17 +0000 (0:00:00.158) 0:00:44.815 *********
2026-03-17 00:50:21.040023 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040030 | orchestrator |
2026-03-17 00:50:21.040037 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-17 00:50:21.040044 | orchestrator | Tuesday 17 March 2026  00:50:17 +0000 (0:00:00.156) 0:00:44.971 *********
2026-03-17 00:50:21.040051 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040070 | orchestrator |
2026-03-17 00:50:21.040077 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-17 00:50:21.040083 | orchestrator | Tuesday 17 March 2026  00:50:17 +0000 (0:00:00.139) 0:00:45.110 *********
2026-03-17 00:50:21.040090 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040096 | orchestrator |
2026-03-17 00:50:21.040102 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-17 00:50:21.040108 | orchestrator | Tuesday 17 March 2026  00:50:17 +0000 (0:00:00.153) 0:00:45.264 *********
2026-03-17 00:50:21.040115 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040123 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040129 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040136 | orchestrator |
2026-03-17 00:50:21.040142 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-17 00:50:21.040149 | orchestrator | Tuesday 17 March 2026  00:50:18 +0000 (0:00:00.177) 0:00:45.441 *********
2026-03-17 00:50:21.040155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040169 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040175 | orchestrator |
2026-03-17 00:50:21.040182 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-17 00:50:21.040189 | orchestrator | Tuesday 17 March 2026  00:50:18 +0000 (0:00:00.188) 0:00:45.629 *********
2026-03-17 00:50:21.040195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040202 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040209 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040215 | orchestrator |
2026-03-17 00:50:21.040222 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-17 00:50:21.040228 | orchestrator | Tuesday 17 March 2026  00:50:18 +0000 (0:00:00.155) 0:00:45.785 *********
2026-03-17 00:50:21.040234 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040242 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040248 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040255 | orchestrator |
2026-03-17 00:50:21.040275 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-17 00:50:21.040282 | orchestrator | Tuesday 17 March 2026  00:50:18 +0000 (0:00:00.399) 0:00:46.184 *********
2026-03-17 00:50:21.040288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040294 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040301 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040308 | orchestrator |
2026-03-17 00:50:21.040315 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-17 00:50:21.040322 | orchestrator | Tuesday 17 March 2026  00:50:18 +0000 (0:00:00.172) 0:00:46.357 *********
2026-03-17 00:50:21.040339 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040347 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040353 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040360 | orchestrator |
2026-03-17 00:50:21.040366 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-17 00:50:21.040373 | orchestrator | Tuesday 17 March 2026  00:50:19 +0000 (0:00:00.155) 0:00:46.513 *********
2026-03-17 00:50:21.040379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040386 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040392 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040398 | orchestrator |
2026-03-17 00:50:21.040405 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-17 00:50:21.040411 | orchestrator | Tuesday 17 March 2026  00:50:19 +0000 (0:00:00.156) 0:00:46.669 *********
2026-03-17 00:50:21.040417 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040424 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040430 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040437 | orchestrator |
2026-03-17 00:50:21.040443 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-17 00:50:21.040450 | orchestrator | Tuesday 17 March 2026  00:50:19 +0000 (0:00:00.152) 0:00:46.821 *********
2026-03-17 00:50:21.040455 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:21.040461 | orchestrator |
2026-03-17 00:50:21.040467 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-17 00:50:21.040473 | orchestrator | Tuesday 17 March 2026  00:50:19 +0000 (0:00:00.513) 0:00:47.335 *********
2026-03-17 00:50:21.040480 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:21.040487 | orchestrator |
2026-03-17 00:50:21.040494 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-17 00:50:21.040500 | orchestrator | Tuesday 17 March 2026  00:50:20 +0000 (0:00:00.521) 0:00:47.857 *********
2026-03-17 00:50:21.040507 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:50:21.040514 | orchestrator |
2026-03-17 00:50:21.040521 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-17 00:50:21.040528 | orchestrator | Tuesday 17 March 2026  00:50:20 +0000 (0:00:00.168) 0:00:48.025 *********
2026-03-17 00:50:21.040535 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'vg_name': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040543 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'vg_name': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040550 | orchestrator |
2026-03-17 00:50:21.040557 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-17 00:50:21.040563 | orchestrator | Tuesday 17 March 2026  00:50:20 +0000 (0:00:00.164) 0:00:48.190 *********
2026-03-17 00:50:21.040569 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:21.040609 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:21.040620 | orchestrator |
2026-03-17 00:50:21.040627 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-17 00:50:21.040634 | orchestrator | Tuesday 17 March 2026  00:50:20 +0000 (0:00:00.167) 0:00:48.357 *********
2026-03-17 00:50:21.040640 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:21.040654 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:27.691477 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:27.691538 | orchestrator |
2026-03-17 00:50:27.691546 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-17 00:50:27.691552 | orchestrator | Tuesday 17 March 2026  00:50:21 +0000 (0:00:00.149) 0:00:48.507 *********
2026-03-17 00:50:27.691558 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'})
2026-03-17 00:50:27.691565 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'})
2026-03-17 00:50:27.691570 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:50:27.691576 | orchestrator |
2026-03-17 00:50:27.691581 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-17 00:50:27.691586 | orchestrator | Tuesday 17 March 2026  00:50:21 +0000 (0:00:00.163) 0:00:48.670 *********
2026-03-17 00:50:27.691592 | orchestrator | ok: [testbed-node-4] => {
2026-03-17 00:50:27.691597 | orchestrator |  "lvm_report": {
2026-03-17 00:50:27.691604 | orchestrator |  "lv": [
2026-03-17 00:50:27.691618 | orchestrator |  {
2026-03-17 00:50:27.691624 | orchestrator |  "lv_name": "osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120",
2026-03-17 00:50:27.691630 | orchestrator |  "vg_name": "ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120"
2026-03-17 00:50:27.691635 | orchestrator |  },
2026-03-17 00:50:27.691640 | orchestrator |  {
2026-03-17 00:50:27.691645 | orchestrator |  "lv_name": "osd-block-ec88a4df-1f79-596d-b281-118c477c78df",
2026-03-17 00:50:27.691651 | orchestrator |  "vg_name": "ceph-ec88a4df-1f79-596d-b281-118c477c78df"
2026-03-17 00:50:27.691656 | orchestrator |  }
2026-03-17 00:50:27.691661 | orchestrator |  ],
2026-03-17 00:50:27.691667 | orchestrator |  "pv": [
2026-03-17 00:50:27.691672 | orchestrator |  {
2026-03-17 00:50:27.691677 | orchestrator |  "pv_name": "/dev/sdb",
2026-03-17 00:50:27.691682 | orchestrator |  "vg_name": "ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120"
2026-03-17 00:50:27.691688 | orchestrator |  },
2026-03-17 00:50:27.691693 | orchestrator |  {
2026-03-17 00:50:27.691698 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-17 00:50:27.691703 | orchestrator |  "vg_name": "ceph-ec88a4df-1f79-596d-b281-118c477c78df"
2026-03-17 00:50:27.691709 | orchestrator |  }
2026-03-17 00:50:27.691715 | orchestrator |  ]
2026-03-17 00:50:27.691720 | orchestrator |  }
2026-03-17 00:50:27.691726 | orchestrator | }
2026-03-17 00:50:27.691731 | orchestrator |
2026-03-17 00:50:27.691736 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-17 00:50:27.691741 | orchestrator |
2026-03-17 00:50:27.691747 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-17 00:50:27.691752 | orchestrator | Tuesday 17 March 2026  00:50:21 +0000 (0:00:00.546) 0:00:49.217 *********
2026-03-17 00:50:27.691757 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-17 00:50:27.691762 | orchestrator |
2026-03-17 00:50:27.691768 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-17 00:50:27.691774 | orchestrator | Tuesday 17 March 2026  00:50:22 +0000 (0:00:00.271) 0:00:49.488 *********
2026-03-17 00:50:27.691793 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:50:27.691799 | orchestrator |
2026-03-17 00:50:27.691804 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.691809 | orchestrator | Tuesday 17 March 2026  00:50:22 +0000 (0:00:00.265) 0:00:49.754 *********
2026-03-17 00:50:27.691814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-17 00:50:27.691881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-17 00:50:27.691887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-17 00:50:27.691895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-17 00:50:27.691900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-17 00:50:27.691905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-17 00:50:27.691910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-17 00:50:27.691916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-17 00:50:27.691921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-17 00:50:27.691926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-17 00:50:27.691931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-17 00:50:27.691936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-17 00:50:27.691941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-17 00:50:27.691946 | orchestrator |
2026-03-17 00:50:27.691951 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.691956 | orchestrator | Tuesday 17 March 2026  00:50:22 +0000 (0:00:00.462) 0:00:50.216 *********
2026-03-17 00:50:27.691961 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:27.691966 | orchestrator |
2026-03-17 00:50:27.691972 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.691977 | orchestrator | Tuesday 17 March 2026  00:50:23 +0000 (0:00:00.205) 0:00:50.422 *********
2026-03-17 00:50:27.691982 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:27.691987 | orchestrator |
2026-03-17 00:50:27.691992 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.692007 | orchestrator | Tuesday 17 March 2026  00:50:23 +0000 (0:00:00.205) 0:00:50.628 *********
2026-03-17 00:50:27.692012 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:27.692017 | orchestrator |
2026-03-17 00:50:27.692022 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.692027 | orchestrator | Tuesday 17 March 2026  00:50:23 +0000 (0:00:00.217) 0:00:50.846 *********
2026-03-17 00:50:27.692032 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:27.692037 | orchestrator |
2026-03-17 00:50:27.692042 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.692048 | orchestrator | Tuesday 17 March 2026  00:50:23 +0000 (0:00:00.224) 0:00:51.070 *********
2026-03-17 00:50:27.692053 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:27.692058 | orchestrator |
2026-03-17 00:50:27.692063 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.692069 | orchestrator | Tuesday 17 March 2026  00:50:23 +0000 (0:00:00.228) 0:00:51.299 *********
2026-03-17 00:50:27.692074 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:27.692080 | orchestrator |
2026-03-17 00:50:27.692086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.692095 | orchestrator | Tuesday 17 March 2026  00:50:24 +0000 (0:00:00.673) 0:00:51.972 *********
2026-03-17 00:50:27.692101 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:27.692111 | orchestrator |
2026-03-17 00:50:27.692117 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.692122 | orchestrator | Tuesday 17 March 2026  00:50:24 +0000 (0:00:00.235) 0:00:52.208 *********
2026-03-17 00:50:27.692128 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:50:27.692133 | orchestrator |
2026-03-17 00:50:27.692139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.692144 | orchestrator | Tuesday 17 March 2026  00:50:25 +0000 (0:00:00.241) 0:00:52.450 *********
2026-03-17 00:50:27.692150 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0)
2026-03-17 00:50:27.692156 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0)
2026-03-17 00:50:27.692162 | orchestrator |
2026-03-17 00:50:27.692167 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.692172 | orchestrator | Tuesday 17 March 2026  00:50:25 +0000 (0:00:00.574) 0:00:53.025 *********
2026-03-17 00:50:27.692178 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7)
2026-03-17 00:50:27.692183 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7)
2026-03-17 00:50:27.692189 | orchestrator |
2026-03-17 00:50:27.692195 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-17 00:50:27.692200 | orchestrator | Tuesday 17 March 2026  00:50:26 +0000 (0:00:00.444) 0:00:53.469 *********
2026-03-17 00:50:27.692208 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865)
2026-03-17 00:50:27.692213 | orchestrator | ok: [testbed-node-5] =>
(item=scsi-SQEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865) 2026-03-17 00:50:27.692219 | orchestrator | 2026-03-17 00:50:27.692224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:50:27.692230 | orchestrator | Tuesday 17 March 2026 00:50:26 +0000 (0:00:00.435) 0:00:53.904 ********* 2026-03-17 00:50:27.692235 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276) 2026-03-17 00:50:27.692241 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276) 2026-03-17 00:50:27.692246 | orchestrator | 2026-03-17 00:50:27.692251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-17 00:50:27.692257 | orchestrator | Tuesday 17 March 2026 00:50:26 +0000 (0:00:00.471) 0:00:54.375 ********* 2026-03-17 00:50:27.692263 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-17 00:50:27.692279 | orchestrator | 2026-03-17 00:50:27.692284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:27.692290 | orchestrator | Tuesday 17 March 2026 00:50:27 +0000 (0:00:00.363) 0:00:54.739 ********* 2026-03-17 00:50:27.692295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-17 00:50:27.692301 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-17 00:50:27.692306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-17 00:50:27.692312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-17 00:50:27.692317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-17 00:50:27.692323 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-17 00:50:27.692328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-17 00:50:27.692334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-17 00:50:27.692339 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-17 00:50:27.692348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-17 00:50:27.692354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-17 00:50:27.692363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-17 00:50:36.768287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-17 00:50:36.768354 | orchestrator | 2026-03-17 00:50:36.768365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768374 | orchestrator | Tuesday 17 March 2026 00:50:27 +0000 (0:00:00.431) 0:00:55.171 ********* 2026-03-17 00:50:36.768382 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768390 | orchestrator | 2026-03-17 00:50:36.768398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768406 | orchestrator | Tuesday 17 March 2026 00:50:27 +0000 (0:00:00.206) 0:00:55.377 ********* 2026-03-17 00:50:36.768422 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768430 | orchestrator | 2026-03-17 00:50:36.768437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768445 | orchestrator | Tuesday 17 March 2026 00:50:28 +0000 (0:00:00.285) 0:00:55.662 ********* 
2026-03-17 00:50:36.768453 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768461 | orchestrator | 2026-03-17 00:50:36.768469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768486 | orchestrator | Tuesday 17 March 2026 00:50:28 +0000 (0:00:00.691) 0:00:56.353 ********* 2026-03-17 00:50:36.768494 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768502 | orchestrator | 2026-03-17 00:50:36.768510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768518 | orchestrator | Tuesday 17 March 2026 00:50:29 +0000 (0:00:00.236) 0:00:56.590 ********* 2026-03-17 00:50:36.768526 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768534 | orchestrator | 2026-03-17 00:50:36.768541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768549 | orchestrator | Tuesday 17 March 2026 00:50:29 +0000 (0:00:00.233) 0:00:56.824 ********* 2026-03-17 00:50:36.768557 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768564 | orchestrator | 2026-03-17 00:50:36.768572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768580 | orchestrator | Tuesday 17 March 2026 00:50:29 +0000 (0:00:00.199) 0:00:57.023 ********* 2026-03-17 00:50:36.768588 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768595 | orchestrator | 2026-03-17 00:50:36.768603 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768611 | orchestrator | Tuesday 17 March 2026 00:50:29 +0000 (0:00:00.227) 0:00:57.251 ********* 2026-03-17 00:50:36.768619 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768627 | orchestrator | 2026-03-17 00:50:36.768635 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2026-03-17 00:50:36.768649 | orchestrator | Tuesday 17 March 2026 00:50:30 +0000 (0:00:00.251) 0:00:57.502 ********* 2026-03-17 00:50:36.768657 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-17 00:50:36.768665 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-17 00:50:36.768673 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-17 00:50:36.768681 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-17 00:50:36.768689 | orchestrator | 2026-03-17 00:50:36.768697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768705 | orchestrator | Tuesday 17 March 2026 00:50:30 +0000 (0:00:00.755) 0:00:58.258 ********* 2026-03-17 00:50:36.768713 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768721 | orchestrator | 2026-03-17 00:50:36.768729 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768748 | orchestrator | Tuesday 17 March 2026 00:50:31 +0000 (0:00:00.206) 0:00:58.464 ********* 2026-03-17 00:50:36.768756 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768764 | orchestrator | 2026-03-17 00:50:36.768772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768780 | orchestrator | Tuesday 17 March 2026 00:50:31 +0000 (0:00:00.224) 0:00:58.689 ********* 2026-03-17 00:50:36.768788 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768796 | orchestrator | 2026-03-17 00:50:36.768803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-17 00:50:36.768811 | orchestrator | Tuesday 17 March 2026 00:50:31 +0000 (0:00:00.226) 0:00:58.915 ********* 2026-03-17 00:50:36.768819 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768827 | orchestrator | 2026-03-17 00:50:36.768883 | orchestrator | TASK [Check 
whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-17 00:50:36.768893 | orchestrator | Tuesday 17 March 2026 00:50:31 +0000 (0:00:00.201) 0:00:59.117 ********* 2026-03-17 00:50:36.768902 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.768911 | orchestrator | 2026-03-17 00:50:36.768919 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-17 00:50:36.768928 | orchestrator | Tuesday 17 March 2026 00:50:31 +0000 (0:00:00.148) 0:00:59.266 ********* 2026-03-17 00:50:36.768937 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50c44467-b3f7-539a-99b7-df2211d1583b'}}) 2026-03-17 00:50:36.768946 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9465b490-647b-5adb-8e2e-a5649c4bc673'}}) 2026-03-17 00:50:36.768954 | orchestrator | 2026-03-17 00:50:36.768963 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-17 00:50:36.768972 | orchestrator | Tuesday 17 March 2026 00:50:32 +0000 (0:00:00.433) 0:00:59.700 ********* 2026-03-17 00:50:36.768981 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'}) 2026-03-17 00:50:36.768990 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'}) 2026-03-17 00:50:36.768998 | orchestrator | 2026-03-17 00:50:36.769007 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-17 00:50:36.769027 | orchestrator | Tuesday 17 March 2026 00:50:34 +0000 (0:00:01.815) 0:01:01.515 ********* 2026-03-17 00:50:36.769035 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 
'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:36.769045 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:36.769053 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.769061 | orchestrator | 2026-03-17 00:50:36.769068 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-17 00:50:36.769077 | orchestrator | Tuesday 17 March 2026 00:50:34 +0000 (0:00:00.174) 0:01:01.689 ********* 2026-03-17 00:50:36.769084 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'}) 2026-03-17 00:50:36.769092 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'}) 2026-03-17 00:50:36.769101 | orchestrator | 2026-03-17 00:50:36.769109 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-17 00:50:36.769118 | orchestrator | Tuesday 17 March 2026 00:50:35 +0000 (0:00:01.220) 0:01:02.909 ********* 2026-03-17 00:50:36.769126 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:36.769140 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:36.769145 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.769150 | orchestrator | 2026-03-17 00:50:36.769155 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-17 00:50:36.769159 | orchestrator | Tuesday 17 March 2026 00:50:35 +0000 
(0:00:00.165) 0:01:03.075 ********* 2026-03-17 00:50:36.769164 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.769168 | orchestrator | 2026-03-17 00:50:36.769174 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-17 00:50:36.769182 | orchestrator | Tuesday 17 March 2026 00:50:35 +0000 (0:00:00.136) 0:01:03.212 ********* 2026-03-17 00:50:36.769190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:36.769198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:36.769205 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.769213 | orchestrator | 2026-03-17 00:50:36.769221 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-17 00:50:36.769228 | orchestrator | Tuesday 17 March 2026 00:50:35 +0000 (0:00:00.162) 0:01:03.374 ********* 2026-03-17 00:50:36.769236 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.769244 | orchestrator | 2026-03-17 00:50:36.769252 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-17 00:50:36.769265 | orchestrator | Tuesday 17 March 2026 00:50:36 +0000 (0:00:00.144) 0:01:03.518 ********* 2026-03-17 00:50:36.769273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:36.769280 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:36.769289 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.769294 | 
orchestrator | 2026-03-17 00:50:36.769299 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-17 00:50:36.769303 | orchestrator | Tuesday 17 March 2026 00:50:36 +0000 (0:00:00.156) 0:01:03.675 ********* 2026-03-17 00:50:36.769308 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.769313 | orchestrator | 2026-03-17 00:50:36.769317 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-17 00:50:36.769322 | orchestrator | Tuesday 17 March 2026 00:50:36 +0000 (0:00:00.137) 0:01:03.812 ********* 2026-03-17 00:50:36.769326 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:36.769331 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:36.769336 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:36.769340 | orchestrator | 2026-03-17 00:50:36.769345 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-17 00:50:36.769349 | orchestrator | Tuesday 17 March 2026 00:50:36 +0000 (0:00:00.156) 0:01:03.969 ********* 2026-03-17 00:50:36.769354 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:50:36.769359 | orchestrator | 2026-03-17 00:50:36.769363 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-17 00:50:36.769368 | orchestrator | Tuesday 17 March 2026 00:50:36 +0000 (0:00:00.135) 0:01:04.105 ********* 2026-03-17 00:50:36.769377 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:42.516047 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:42.516098 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516105 | orchestrator | 2026-03-17 00:50:42.516110 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-17 00:50:42.516114 | orchestrator | Tuesday 17 March 2026 00:50:37 +0000 (0:00:00.361) 0:01:04.466 ********* 2026-03-17 00:50:42.516119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:42.516123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:42.516127 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516131 | orchestrator | 2026-03-17 00:50:42.516142 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-17 00:50:42.516146 | orchestrator | Tuesday 17 March 2026 00:50:37 +0000 (0:00:00.195) 0:01:04.662 ********* 2026-03-17 00:50:42.516150 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:42.516154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:42.516158 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516161 | orchestrator | 2026-03-17 00:50:42.516165 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-17 00:50:42.516169 | orchestrator | Tuesday 17 March 2026 00:50:37 +0000 (0:00:00.196) 0:01:04.859 ********* 2026-03-17 
00:50:42.516173 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516177 | orchestrator | 2026-03-17 00:50:42.516181 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-17 00:50:42.516185 | orchestrator | Tuesday 17 March 2026 00:50:37 +0000 (0:00:00.144) 0:01:05.003 ********* 2026-03-17 00:50:42.516189 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516193 | orchestrator | 2026-03-17 00:50:42.516196 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-17 00:50:42.516200 | orchestrator | Tuesday 17 March 2026 00:50:37 +0000 (0:00:00.132) 0:01:05.135 ********* 2026-03-17 00:50:42.516204 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516208 | orchestrator | 2026-03-17 00:50:42.516212 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-17 00:50:42.516216 | orchestrator | Tuesday 17 March 2026 00:50:37 +0000 (0:00:00.132) 0:01:05.268 ********* 2026-03-17 00:50:42.516221 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:50:42.516228 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-17 00:50:42.516239 | orchestrator | } 2026-03-17 00:50:42.516246 | orchestrator | 2026-03-17 00:50:42.516252 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-17 00:50:42.516258 | orchestrator | Tuesday 17 March 2026 00:50:37 +0000 (0:00:00.127) 0:01:05.396 ********* 2026-03-17 00:50:42.516265 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:50:42.516271 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-17 00:50:42.516277 | orchestrator | } 2026-03-17 00:50:42.516284 | orchestrator | 2026-03-17 00:50:42.516291 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-17 00:50:42.516297 | orchestrator | Tuesday 17 March 2026 00:50:38 +0000 (0:00:00.113) 
0:01:05.509 ********* 2026-03-17 00:50:42.516303 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:50:42.516310 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-17 00:50:42.516316 | orchestrator | } 2026-03-17 00:50:42.516323 | orchestrator | 2026-03-17 00:50:42.516329 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-17 00:50:42.516335 | orchestrator | Tuesday 17 March 2026 00:50:38 +0000 (0:00:00.118) 0:01:05.627 ********* 2026-03-17 00:50:42.516355 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:50:42.516362 | orchestrator | 2026-03-17 00:50:42.516368 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-17 00:50:42.516375 | orchestrator | Tuesday 17 March 2026 00:50:38 +0000 (0:00:00.497) 0:01:06.124 ********* 2026-03-17 00:50:42.516381 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:50:42.516387 | orchestrator | 2026-03-17 00:50:42.516394 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-17 00:50:42.516400 | orchestrator | Tuesday 17 March 2026 00:50:39 +0000 (0:00:00.462) 0:01:06.586 ********* 2026-03-17 00:50:42.516407 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:50:42.516413 | orchestrator | 2026-03-17 00:50:42.516420 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-17 00:50:42.516427 | orchestrator | Tuesday 17 March 2026 00:50:39 +0000 (0:00:00.475) 0:01:07.062 ********* 2026-03-17 00:50:42.516433 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:50:42.516439 | orchestrator | 2026-03-17 00:50:42.516446 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-17 00:50:42.516452 | orchestrator | Tuesday 17 March 2026 00:50:39 +0000 (0:00:00.279) 0:01:07.342 ********* 2026-03-17 00:50:42.516459 | orchestrator | skipping: [testbed-node-5] 2026-03-17 
00:50:42.516465 | orchestrator | 2026-03-17 00:50:42.516472 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-17 00:50:42.516478 | orchestrator | Tuesday 17 March 2026 00:50:40 +0000 (0:00:00.096) 0:01:07.439 ********* 2026-03-17 00:50:42.516485 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516491 | orchestrator | 2026-03-17 00:50:42.516498 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-17 00:50:42.516504 | orchestrator | Tuesday 17 March 2026 00:50:40 +0000 (0:00:00.107) 0:01:07.547 ********* 2026-03-17 00:50:42.516511 | orchestrator | ok: [testbed-node-5] => { 2026-03-17 00:50:42.516518 | orchestrator |  "vgs_report": { 2026-03-17 00:50:42.516525 | orchestrator |  "vg": [] 2026-03-17 00:50:42.516541 | orchestrator |  } 2026-03-17 00:50:42.516548 | orchestrator | } 2026-03-17 00:50:42.516555 | orchestrator | 2026-03-17 00:50:42.516570 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-17 00:50:42.516582 | orchestrator | Tuesday 17 March 2026 00:50:40 +0000 (0:00:00.144) 0:01:07.691 ********* 2026-03-17 00:50:42.516589 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516596 | orchestrator | 2026-03-17 00:50:42.516602 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-17 00:50:42.516608 | orchestrator | Tuesday 17 March 2026 00:50:40 +0000 (0:00:00.127) 0:01:07.819 ********* 2026-03-17 00:50:42.516615 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516621 | orchestrator | 2026-03-17 00:50:42.516628 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-17 00:50:42.516634 | orchestrator | Tuesday 17 March 2026 00:50:40 +0000 (0:00:00.140) 0:01:07.960 ********* 2026-03-17 00:50:42.516641 | orchestrator | skipping: [testbed-node-5] 2026-03-17 
00:50:42.516647 | orchestrator | 2026-03-17 00:50:42.516654 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-17 00:50:42.516665 | orchestrator | Tuesday 17 March 2026 00:50:40 +0000 (0:00:00.121) 0:01:08.082 ********* 2026-03-17 00:50:42.516672 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516679 | orchestrator | 2026-03-17 00:50:42.516686 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-17 00:50:42.516693 | orchestrator | Tuesday 17 March 2026 00:50:40 +0000 (0:00:00.106) 0:01:08.188 ********* 2026-03-17 00:50:42.516699 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516706 | orchestrator | 2026-03-17 00:50:42.516713 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-17 00:50:42.516720 | orchestrator | Tuesday 17 March 2026 00:50:40 +0000 (0:00:00.125) 0:01:08.313 ********* 2026-03-17 00:50:42.516726 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516738 | orchestrator | 2026-03-17 00:50:42.516745 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-17 00:50:42.516752 | orchestrator | Tuesday 17 March 2026 00:50:41 +0000 (0:00:00.119) 0:01:08.433 ********* 2026-03-17 00:50:42.516759 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516765 | orchestrator | 2026-03-17 00:50:42.516772 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-17 00:50:42.516779 | orchestrator | Tuesday 17 March 2026 00:50:41 +0000 (0:00:00.123) 0:01:08.557 ********* 2026-03-17 00:50:42.516786 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516793 | orchestrator | 2026-03-17 00:50:42.516799 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-17 00:50:42.516806 | orchestrator | Tuesday 17 
March 2026 00:50:41 +0000 (0:00:00.126) 0:01:08.683 ********* 2026-03-17 00:50:42.516813 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516820 | orchestrator | 2026-03-17 00:50:42.516827 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-17 00:50:42.516834 | orchestrator | Tuesday 17 March 2026 00:50:41 +0000 (0:00:00.263) 0:01:08.946 ********* 2026-03-17 00:50:42.516841 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516873 | orchestrator | 2026-03-17 00:50:42.516880 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-17 00:50:42.516887 | orchestrator | Tuesday 17 March 2026 00:50:41 +0000 (0:00:00.128) 0:01:09.074 ********* 2026-03-17 00:50:42.516894 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516901 | orchestrator | 2026-03-17 00:50:42.516907 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-17 00:50:42.516914 | orchestrator | Tuesday 17 March 2026 00:50:41 +0000 (0:00:00.135) 0:01:09.210 ********* 2026-03-17 00:50:42.516921 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516927 | orchestrator | 2026-03-17 00:50:42.516933 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-17 00:50:42.516940 | orchestrator | Tuesday 17 March 2026 00:50:41 +0000 (0:00:00.115) 0:01:09.325 ********* 2026-03-17 00:50:42.516947 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516953 | orchestrator | 2026-03-17 00:50:42.516960 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-17 00:50:42.516966 | orchestrator | Tuesday 17 March 2026 00:50:42 +0000 (0:00:00.114) 0:01:09.440 ********* 2026-03-17 00:50:42.516972 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.516979 | orchestrator | 2026-03-17 00:50:42.516985 | 
orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-17 00:50:42.516991 | orchestrator | Tuesday 17 March 2026 00:50:42 +0000 (0:00:00.122) 0:01:09.562 ********* 2026-03-17 00:50:42.516998 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:42.517005 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:42.517011 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.517018 | orchestrator | 2026-03-17 00:50:42.517025 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-17 00:50:42.517031 | orchestrator | Tuesday 17 March 2026 00:50:42 +0000 (0:00:00.137) 0:01:09.699 ********* 2026-03-17 00:50:42.517038 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:42.517044 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:42.517051 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:42.517057 | orchestrator | 2026-03-17 00:50:42.517064 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-17 00:50:42.517076 | orchestrator | Tuesday 17 March 2026 00:50:42 +0000 (0:00:00.151) 0:01:09.851 ********* 2026-03-17 00:50:42.517088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:45.332318 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:45.332389 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:45.332395 | orchestrator | 2026-03-17 00:50:45.332401 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-17 00:50:45.332407 | orchestrator | Tuesday 17 March 2026 00:50:42 +0000 (0:00:00.131) 0:01:09.983 ********* 2026-03-17 00:50:45.332412 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:45.332428 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:45.332432 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:45.332437 | orchestrator | 2026-03-17 00:50:45.332441 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-17 00:50:45.332446 | orchestrator | Tuesday 17 March 2026 00:50:42 +0000 (0:00:00.144) 0:01:10.127 ********* 2026-03-17 00:50:45.332450 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:45.332454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:45.332458 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:45.332463 | orchestrator | 2026-03-17 00:50:45.332467 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-17 00:50:45.332471 | orchestrator | Tuesday 17 March 2026 00:50:42 +0000 (0:00:00.150) 0:01:10.278 ********* 2026-03-17 
00:50:45.332475 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:45.332479 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:45.332483 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:45.332487 | orchestrator | 2026-03-17 00:50:45.332491 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-17 00:50:45.332495 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.145) 0:01:10.423 ********* 2026-03-17 00:50:45.332499 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:45.332503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:45.332507 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:45.332511 | orchestrator | 2026-03-17 00:50:45.332515 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-17 00:50:45.332519 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.275) 0:01:10.698 ********* 2026-03-17 00:50:45.332523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:45.332527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:45.332531 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:45.332548 | orchestrator | 
2026-03-17 00:50:45.332552 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-17 00:50:45.332556 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.146) 0:01:10.845 ********* 2026-03-17 00:50:45.332560 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:50:45.332565 | orchestrator | 2026-03-17 00:50:45.332570 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-17 00:50:45.332574 | orchestrator | Tuesday 17 March 2026 00:50:43 +0000 (0:00:00.507) 0:01:11.352 ********* 2026-03-17 00:50:45.332578 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:50:45.332582 | orchestrator | 2026-03-17 00:50:45.332586 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-17 00:50:45.332590 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.561) 0:01:11.914 ********* 2026-03-17 00:50:45.332594 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:50:45.332598 | orchestrator | 2026-03-17 00:50:45.332602 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-17 00:50:45.332606 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.158) 0:01:12.072 ********* 2026-03-17 00:50:45.332611 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'vg_name': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'}) 2026-03-17 00:50:45.332616 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'vg_name': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'}) 2026-03-17 00:50:45.332620 | orchestrator | 2026-03-17 00:50:45.332624 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-17 00:50:45.332628 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.150) 0:01:12.223 ********* 2026-03-17 00:50:45.332642 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:45.332647 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:45.332651 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:45.332655 | orchestrator | 2026-03-17 00:50:45.332659 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-17 00:50:45.332663 | orchestrator | Tuesday 17 March 2026 00:50:44 +0000 (0:00:00.126) 0:01:12.349 ********* 2026-03-17 00:50:45.332667 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:45.332671 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:45.332676 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:45.332680 | orchestrator | 2026-03-17 00:50:45.332684 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-17 00:50:45.332688 | orchestrator | Tuesday 17 March 2026 00:50:45 +0000 (0:00:00.122) 0:01:12.472 ********* 2026-03-17 00:50:45.332692 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'})  2026-03-17 00:50:45.332696 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'})  2026-03-17 00:50:45.332700 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:50:45.332704 | orchestrator | 2026-03-17 
00:50:45.332708 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-17 00:50:45.332712 | orchestrator | Tuesday 17 March 2026 00:50:45 +0000 (0:00:00.121) 0:01:12.593 *********
2026-03-17 00:50:45.332716 | orchestrator | ok: [testbed-node-5] => {
2026-03-17 00:50:45.332720 | orchestrator |  "lvm_report": {
2026-03-17 00:50:45.332725 | orchestrator |  "lv": [
2026-03-17 00:50:45.332736 | orchestrator |  {
2026-03-17 00:50:45.332741 | orchestrator |  "lv_name": "osd-block-50c44467-b3f7-539a-99b7-df2211d1583b",
2026-03-17 00:50:45.332745 | orchestrator |  "vg_name": "ceph-50c44467-b3f7-539a-99b7-df2211d1583b"
2026-03-17 00:50:45.332749 | orchestrator |  },
2026-03-17 00:50:45.332753 | orchestrator |  {
2026-03-17 00:50:45.332757 | orchestrator |  "lv_name": "osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673",
2026-03-17 00:50:45.332761 | orchestrator |  "vg_name": "ceph-9465b490-647b-5adb-8e2e-a5649c4bc673"
2026-03-17 00:50:45.332765 | orchestrator |  }
2026-03-17 00:50:45.332769 | orchestrator |  ],
2026-03-17 00:50:45.332774 | orchestrator |  "pv": [
2026-03-17 00:50:45.332778 | orchestrator |  {
2026-03-17 00:50:45.332782 | orchestrator |  "pv_name": "/dev/sdb",
2026-03-17 00:50:45.332786 | orchestrator |  "vg_name": "ceph-50c44467-b3f7-539a-99b7-df2211d1583b"
2026-03-17 00:50:45.332790 | orchestrator |  },
2026-03-17 00:50:45.332794 | orchestrator |  {
2026-03-17 00:50:45.332798 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-17 00:50:45.332802 | orchestrator |  "vg_name": "ceph-9465b490-647b-5adb-8e2e-a5649c4bc673"
2026-03-17 00:50:45.332806 | orchestrator |  }
2026-03-17 00:50:45.332810 | orchestrator |  ]
2026-03-17 00:50:45.332814 | orchestrator |  }
2026-03-17 00:50:45.332818 | orchestrator | }
2026-03-17 00:50:45.332823 | orchestrator |
2026-03-17 00:50:45.332827 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:50:45.332831 | orchestrator |
testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-17 00:50:45.332835 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-17 00:50:45.332839 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-03-17 00:50:45.332843 | orchestrator |
2026-03-17 00:50:45.332847 | orchestrator |
2026-03-17 00:50:45.332873 | orchestrator |
2026-03-17 00:50:45.332882 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:50:45.332887 | orchestrator | Tuesday 17 March 2026 00:50:45 +0000 (0:00:00.118) 0:01:12.712 *********
2026-03-17 00:50:45.332891 | orchestrator | ===============================================================================
2026-03-17 00:50:45.332896 | orchestrator | Create block VGs -------------------------------------------------------- 5.51s
2026-03-17 00:50:45.332900 | orchestrator | Create block LVs -------------------------------------------------------- 3.83s
2026-03-17 00:50:45.332905 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.87s
2026-03-17 00:50:45.332909 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s
2026-03-17 00:50:45.332914 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s
2026-03-17 00:50:45.332918 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.54s
2026-03-17 00:50:45.332923 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s
2026-03-17 00:50:45.332927 | orchestrator | Add known partitions to the list of available block devices ------------- 1.48s
2026-03-17 00:50:45.332934 | orchestrator | Add known links to the list of available block devices ------------------ 1.30s
2026-03-17 00:50:45.578094 | orchestrator | Print LVM report data --------------------------------------------------- 1.00s
2026-03-17 00:50:45.578167 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s
2026-03-17 00:50:45.578173 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2026-03-17 00:50:45.578178 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.87s
2026-03-17 00:50:45.578182 | orchestrator | Add known links to the list of available block devices ------------------ 0.81s
2026-03-17 00:50:45.578203 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s
2026-03-17 00:50:45.578210 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2026-03-17 00:50:45.578229 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2026-03-17 00:50:45.578240 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s
2026-03-17 00:50:45.578246 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.70s
2026-03-17 00:50:45.578252 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2026-03-17 00:50:56.866360 | orchestrator | 2026-03-17 00:50:56 | INFO  | Prepare task for execution of facts.
2026-03-17 00:50:56.942294 | orchestrator | 2026-03-17 00:50:56 | INFO  | Task 43d8ee3d-108a-4648-8501-72c07551b180 (facts) was prepared for execution.
2026-03-17 00:50:56.942382 | orchestrator | 2026-03-17 00:50:56 | INFO  | It takes a moment until task 43d8ee3d-108a-4648-8501-72c07551b180 (facts) has been started and output is visible here.
2026-03-17 00:51:08.721167 | orchestrator | 2026-03-17 00:51:08.721220 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-17 00:51:08.721227 | orchestrator | 2026-03-17 00:51:08.721231 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-17 00:51:08.721236 | orchestrator | Tuesday 17 March 2026 00:50:59 +0000 (0:00:00.327) 0:00:00.327 ********* 2026-03-17 00:51:08.721240 | orchestrator | ok: [testbed-manager] 2026-03-17 00:51:08.721244 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:51:08.721248 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:51:08.721252 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:51:08.721256 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:51:08.721260 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:51:08.721263 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:51:08.721267 | orchestrator | 2026-03-17 00:51:08.721271 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-17 00:51:08.721275 | orchestrator | Tuesday 17 March 2026 00:51:01 +0000 (0:00:01.351) 0:00:01.678 ********* 2026-03-17 00:51:08.721279 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:08.721283 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:51:08.721287 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:51:08.721291 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:51:08.721295 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:51:08.721298 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:51:08.721302 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:51:08.721306 | orchestrator | 2026-03-17 00:51:08.721310 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-17 00:51:08.721314 | orchestrator | 2026-03-17 00:51:08.721318 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-17 00:51:08.721321 | orchestrator | Tuesday 17 March 2026 00:51:02 +0000 (0:00:01.078) 0:00:02.757 ********* 2026-03-17 00:51:08.721325 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:51:08.721329 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:51:08.721333 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:51:08.721337 | orchestrator | ok: [testbed-manager] 2026-03-17 00:51:08.721341 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:51:08.721344 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:51:08.721348 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:51:08.721352 | orchestrator | 2026-03-17 00:51:08.721356 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-17 00:51:08.721360 | orchestrator | 2026-03-17 00:51:08.721364 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-17 00:51:08.721368 | orchestrator | Tuesday 17 March 2026 00:51:07 +0000 (0:00:05.523) 0:00:08.280 ********* 2026-03-17 00:51:08.721371 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:08.721375 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:51:08.721391 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:51:08.721395 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:51:08.721398 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:51:08.721402 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:51:08.721406 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:51:08.721410 | orchestrator | 2026-03-17 00:51:08.721414 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:51:08.721418 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:51:08.721422 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-17 00:51:08.721426 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:51:08.721430 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:51:08.721433 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:51:08.721437 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:51:08.721441 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-17 00:51:08.721445 | orchestrator | 2026-03-17 00:51:08.721449 | orchestrator | 2026-03-17 00:51:08.721453 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:51:08.721457 | orchestrator | Tuesday 17 March 2026 00:51:08 +0000 (0:00:00.524) 0:00:08.804 ********* 2026-03-17 00:51:08.721460 | orchestrator | =============================================================================== 2026-03-17 00:51:08.721464 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.52s 2026-03-17 00:51:08.721468 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.35s 2026-03-17 00:51:08.721478 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s 2026-03-17 00:51:08.721482 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-03-17 00:51:20.065264 | orchestrator | 2026-03-17 00:51:20 | INFO  | Prepare task for execution of frr. 2026-03-17 00:51:20.151475 | orchestrator | 2026-03-17 00:51:20 | INFO  | Task d9c95f87-5fbe-49ac-af45-42b703c4c37a (frr) was prepared for execution. 
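The facts play above creates a custom facts directory on every host and, when configured, copies fact files into it; Ansible's setup module then exposes those files under `ansible_local`. A minimal sketch of how static JSON local facts are read from the conventional `/etc/ansible/facts.d` layout (`read_local_facts` is a hypothetical helper for illustration, not part of the osism.commons.facts role):

```python
import json
import pathlib


def read_local_facts(facts_dir: str) -> dict:
    """Read static *.fact files the way the setup module exposes them:
    each file becomes a key under ansible_local named after the file stem.
    (Executable .fact files are run by Ansible rather than parsed; this
    sketch covers only the static JSON case.)"""
    facts = {}
    for path in sorted(pathlib.Path(facts_dir).glob("*.fact")):
        facts[path.stem] = json.loads(path.read_text())
    return facts
```

On a node this would turn a file such as `/etc/ansible/facts.d/example.fact` (name hypothetical) into `ansible_local['example']`.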
2026-03-17 00:51:20.151566 | orchestrator | 2026-03-17 00:51:20 | INFO  | It takes a moment until task d9c95f87-5fbe-49ac-af45-42b703c4c37a (frr) has been started and output is visible here. 2026-03-17 00:51:42.625672 | orchestrator | 2026-03-17 00:51:42.625775 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-17 00:51:42.625791 | orchestrator | 2026-03-17 00:51:42.625802 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-17 00:51:42.625813 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:00.293) 0:00:00.293 ********* 2026-03-17 00:51:42.625823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-17 00:51:42.625849 | orchestrator | 2026-03-17 00:51:42.625859 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-17 00:51:42.625877 | orchestrator | Tuesday 17 March 2026 00:51:23 +0000 (0:00:00.207) 0:00:00.501 ********* 2026-03-17 00:51:42.625887 | orchestrator | changed: [testbed-manager] 2026-03-17 00:51:42.625897 | orchestrator | 2026-03-17 00:51:42.625906 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-17 00:51:42.625983 | orchestrator | Tuesday 17 March 2026 00:51:24 +0000 (0:00:01.374) 0:00:01.876 ********* 2026-03-17 00:51:42.625996 | orchestrator | changed: [testbed-manager] 2026-03-17 00:51:42.626006 | orchestrator | 2026-03-17 00:51:42.626078 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-17 00:51:42.626091 | orchestrator | Tuesday 17 March 2026 00:51:33 +0000 (0:00:08.604) 0:00:10.480 ********* 2026-03-17 00:51:42.626103 | orchestrator | ok: [testbed-manager] 2026-03-17 00:51:42.626110 | orchestrator | 2026-03-17 00:51:42.626117 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-17 00:51:42.626123 | orchestrator | Tuesday 17 March 2026 00:51:34 +0000 (0:00:00.918) 0:00:11.399 ********* 2026-03-17 00:51:42.626130 | orchestrator | changed: [testbed-manager] 2026-03-17 00:51:42.626136 | orchestrator | 2026-03-17 00:51:42.626142 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-17 00:51:42.626149 | orchestrator | Tuesday 17 March 2026 00:51:35 +0000 (0:00:00.849) 0:00:12.248 ********* 2026-03-17 00:51:42.626155 | orchestrator | ok: [testbed-manager] 2026-03-17 00:51:42.626161 | orchestrator | 2026-03-17 00:51:42.626168 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-17 00:51:42.626174 | orchestrator | Tuesday 17 March 2026 00:51:36 +0000 (0:00:01.099) 0:00:13.348 ********* 2026-03-17 00:51:42.626181 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:42.626187 | orchestrator | 2026-03-17 00:51:42.626193 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-17 00:51:42.626200 | orchestrator | Tuesday 17 March 2026 00:51:36 +0000 (0:00:00.150) 0:00:13.498 ********* 2026-03-17 00:51:42.626206 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:42.626212 | orchestrator | 2026-03-17 00:51:42.626218 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-17 00:51:42.626226 | orchestrator | Tuesday 17 March 2026 00:51:36 +0000 (0:00:00.235) 0:00:13.734 ********* 2026-03-17 00:51:42.626233 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:42.626240 | orchestrator | 2026-03-17 00:51:42.626247 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-17 00:51:42.626255 | orchestrator | Tuesday 17 March 2026 00:51:36 +0000 (0:00:00.149) 0:00:13.883 ********* 2026-03-17 
00:51:42.626262 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:42.626269 | orchestrator | 2026-03-17 00:51:42.626276 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-17 00:51:42.626283 | orchestrator | Tuesday 17 March 2026 00:51:37 +0000 (0:00:00.129) 0:00:14.012 ********* 2026-03-17 00:51:42.626290 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:51:42.626297 | orchestrator | 2026-03-17 00:51:42.626304 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-17 00:51:42.626312 | orchestrator | Tuesday 17 March 2026 00:51:37 +0000 (0:00:00.142) 0:00:14.155 ********* 2026-03-17 00:51:42.626319 | orchestrator | changed: [testbed-manager] 2026-03-17 00:51:42.626325 | orchestrator | 2026-03-17 00:51:42.626333 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-17 00:51:42.626340 | orchestrator | Tuesday 17 March 2026 00:51:38 +0000 (0:00:00.847) 0:00:15.002 ********* 2026-03-17 00:51:42.626346 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-17 00:51:42.626353 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-17 00:51:42.626362 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-17 00:51:42.626369 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-17 00:51:42.626376 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-17 00:51:42.626392 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-17 00:51:42.626416 | orchestrator | 2026-03-17 00:51:42.626427 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-17 00:51:42.626451 | orchestrator | Tuesday 17 March 2026 00:51:39 +0000 (0:00:01.922) 0:00:16.925 ********* 2026-03-17 00:51:42.626462 | orchestrator | ok: [testbed-manager] 2026-03-17 00:51:42.626472 | orchestrator | 2026-03-17 00:51:42.626484 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-17 00:51:42.626491 | orchestrator | Tuesday 17 March 2026 00:51:41 +0000 (0:00:01.114) 0:00:18.039 ********* 2026-03-17 00:51:42.626499 | orchestrator | changed: [testbed-manager] 2026-03-17 00:51:42.626505 | orchestrator | 2026-03-17 00:51:42.626513 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:51:42.626521 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-17 00:51:42.626528 | orchestrator | 2026-03-17 00:51:42.626535 | orchestrator | 2026-03-17 00:51:42.626561 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:51:42.626572 | orchestrator | Tuesday 17 March 2026 00:51:42 +0000 (0:00:01.290) 0:00:19.330 ********* 2026-03-17 00:51:42.626582 | orchestrator | =============================================================================== 2026-03-17 00:51:42.626592 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.60s 2026-03-17 00:51:42.626601 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.92s 2026-03-17 00:51:42.626611 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.37s 2026-03-17 00:51:42.626620 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.29s 2026-03-17 00:51:42.626630 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.11s 
2026-03-17 00:51:42.626641 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.10s 2026-03-17 00:51:42.626653 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.92s 2026-03-17 00:51:42.626659 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.85s 2026-03-17 00:51:42.626665 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.85s 2026-03-17 00:51:42.626672 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.24s 2026-03-17 00:51:42.626678 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s 2026-03-17 00:51:42.626684 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.15s 2026-03-17 00:51:42.626690 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.15s 2026-03-17 00:51:42.626696 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-03-17 00:51:42.626703 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-03-17 00:51:42.731465 | orchestrator | 2026-03-17 00:51:42.732281 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Mar 17 00:51:42 UTC 2026 2026-03-17 00:51:42.732320 | orchestrator | 2026-03-17 00:51:43.746255 | orchestrator | 2026-03-17 00:51:43 | INFO  | Collection nutshell is prepared for execution 2026-03-17 00:51:43.846446 | orchestrator | 2026-03-17 00:51:43 | INFO  | A [0] - dotfiles 2026-03-17 00:51:53.871758 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [0] - homer 2026-03-17 00:51:53.871823 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [0] - netdata 2026-03-17 00:51:53.872336 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [0] - openstackclient 2026-03-17 00:51:53.872587 | orchestrator | 2026-03-17 
00:51:53 | INFO  | A [0] - phpmyadmin
2026-03-17 00:51:53.872853 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [0] - common
2026-03-17 00:51:53.877059 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [1] -- loadbalancer
2026-03-17 00:51:53.877432 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [2] --- opensearch
2026-03-17 00:51:53.877643 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [2] --- mariadb-ng
2026-03-17 00:51:53.878053 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [3] ---- horizon
2026-03-17 00:51:53.878438 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [3] ---- keystone
2026-03-17 00:51:53.878987 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [4] ----- neutron
2026-03-17 00:51:53.879267 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [5] ------ wait-for-nova
2026-03-17 00:51:53.879280 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [6] ------- octavia
2026-03-17 00:51:53.880915 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [4] ----- barbican
2026-03-17 00:51:53.880939 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [4] ----- designate
2026-03-17 00:51:53.881160 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [4] ----- ironic
2026-03-17 00:51:53.881488 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [4] ----- placement
2026-03-17 00:51:53.881901 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [4] ----- magnum
2026-03-17 00:51:53.883275 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [1] -- openvswitch
2026-03-17 00:51:53.883402 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [2] --- ovn
2026-03-17 00:51:53.883677 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [1] -- memcached
2026-03-17 00:51:53.884414 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [1] -- redis
2026-03-17 00:51:53.884449 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [1] -- rabbitmq-ng
2026-03-17 00:51:53.884457 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [0] - kubernetes
2026-03-17 00:51:53.886785 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [1] -- kubeconfig
2026-03-17 00:51:53.886841 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [1] -- copy-kubeconfig
2026-03-17 00:51:53.887316 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [0] - ceph
2026-03-17 00:51:53.889167 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [1] -- ceph-pools
2026-03-17 00:51:53.889211 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [2] --- copy-ceph-keys
2026-03-17 00:51:53.889314 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [3] ---- cephclient
2026-03-17 00:51:53.889323 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [4] ----- ceph-bootstrap-dashboard
2026-03-17 00:51:53.889800 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [4] ----- wait-for-keystone
2026-03-17 00:51:53.889902 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [5] ------ kolla-ceph-rgw
2026-03-17 00:51:53.889913 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [5] ------ glance
2026-03-17 00:51:53.890244 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [5] ------ cinder
2026-03-17 00:51:53.890355 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [5] ------ nova
2026-03-17 00:51:53.890947 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [4] ----- prometheus
2026-03-17 00:51:53.891000 | orchestrator | 2026-03-17 00:51:53 | INFO  | A [5] ------ grafana
2026-03-17 00:51:54.070859 | orchestrator | 2026-03-17 00:51:54 | INFO  | All tasks of the collection nutshell are prepared for execution
2026-03-17 00:51:54.070907 | orchestrator | 2026-03-17 00:51:54 | INFO  | Tasks are running in the background
2026-03-17 00:51:55.917205 | orchestrator | 2026-03-17 00:51:55 | INFO  | No task IDs specified, wait for all currently running tasks
2026-03-17 00:51:58.137059 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task ef824fdf-f02f-406a-9040-c3d7fa04f0bf is in state STARTED
2026-03-17 00:51:58.137345 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:51:58.139054 | orchestrator | 2026-03-17 00:51:58 | INFO
 | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:51:58.142196 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:51:58.142877 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:51:58.143704 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:51:58.146240 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task 1d25fb9e-a092-4f1d-bf4d-f9d62b6ba4e4 is in state STARTED 2026-03-17 00:51:58.147173 | orchestrator | 2026-03-17 00:51:58 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:51:58.147203 | orchestrator | 2026-03-17 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:01.194880 | orchestrator | 2026-03-17 00:52:01 | INFO  | Task ef824fdf-f02f-406a-9040-c3d7fa04f0bf is in state STARTED 2026-03-17 00:52:01.195164 | orchestrator | 2026-03-17 00:52:01 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:01.197645 | orchestrator | 2026-03-17 00:52:01 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:01.198258 | orchestrator | 2026-03-17 00:52:01 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:01.200829 | orchestrator | 2026-03-17 00:52:01 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:01.201474 | orchestrator | 2026-03-17 00:52:01 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:01.202893 | orchestrator | 2026-03-17 00:52:01 | INFO  | Task 1d25fb9e-a092-4f1d-bf4d-f9d62b6ba4e4 is in state STARTED 2026-03-17 00:52:01.203584 | orchestrator | 2026-03-17 00:52:01 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:01.203621 | orchestrator | 2026-03-17 
00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:04.235045 | orchestrator | 2026-03-17 00:52:04 | INFO  | Task ef824fdf-f02f-406a-9040-c3d7fa04f0bf is in state STARTED 2026-03-17 00:52:04.235456 | orchestrator | 2026-03-17 00:52:04 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:04.235982 | orchestrator | 2026-03-17 00:52:04 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:04.236562 | orchestrator | 2026-03-17 00:52:04 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:04.237112 | orchestrator | 2026-03-17 00:52:04 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:04.237683 | orchestrator | 2026-03-17 00:52:04 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:04.238238 | orchestrator | 2026-03-17 00:52:04 | INFO  | Task 1d25fb9e-a092-4f1d-bf4d-f9d62b6ba4e4 is in state STARTED 2026-03-17 00:52:04.238743 | orchestrator | 2026-03-17 00:52:04 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:04.238824 | orchestrator | 2026-03-17 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:07.701116 | orchestrator | 2026-03-17 00:52:07 | INFO  | Task ef824fdf-f02f-406a-9040-c3d7fa04f0bf is in state STARTED 2026-03-17 00:52:07.701435 | orchestrator | 2026-03-17 00:52:07 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:07.704259 | orchestrator | 2026-03-17 00:52:07 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:07.704345 | orchestrator | 2026-03-17 00:52:07 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:07.704357 | orchestrator | 2026-03-17 00:52:07 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:07.704365 | orchestrator | 2026-03-17 
00:52:07 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:07.704373 | orchestrator | 2026-03-17 00:52:07 | INFO  | Task 1d25fb9e-a092-4f1d-bf4d-f9d62b6ba4e4 is in state STARTED 2026-03-17 00:52:07.704380 | orchestrator | 2026-03-17 00:52:07 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:07.704405 | orchestrator | 2026-03-17 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:10.773532 | orchestrator | 2026-03-17 00:52:10 | INFO  | Task ef824fdf-f02f-406a-9040-c3d7fa04f0bf is in state STARTED 2026-03-17 00:52:10.773606 | orchestrator | 2026-03-17 00:52:10 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:10.773613 | orchestrator | 2026-03-17 00:52:10 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:10.773619 | orchestrator | 2026-03-17 00:52:10 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:10.773625 | orchestrator | 2026-03-17 00:52:10 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:10.773630 | orchestrator | 2026-03-17 00:52:10 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:10.839886 | orchestrator | 2026-03-17 00:52:10 | INFO  | Task 1d25fb9e-a092-4f1d-bf4d-f9d62b6ba4e4 is in state STARTED 2026-03-17 00:52:10.839967 | orchestrator | 2026-03-17 00:52:10 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:10.839977 | orchestrator | 2026-03-17 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:13.820073 | orchestrator | 2026-03-17 00:52:13 | INFO  | Task ef824fdf-f02f-406a-9040-c3d7fa04f0bf is in state STARTED 2026-03-17 00:52:13.823085 | orchestrator | 2026-03-17 00:52:13 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:13.825517 | orchestrator | 2026-03-17 
00:52:13 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:13.825849 | orchestrator | 2026-03-17 00:52:13 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:13.833451 | orchestrator | 2026-03-17 00:52:13 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:13.838236 | orchestrator | 2026-03-17 00:52:13 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:13.839918 | orchestrator | 2026-03-17 00:52:13 | INFO  | Task 1d25fb9e-a092-4f1d-bf4d-f9d62b6ba4e4 is in state SUCCESS 2026-03-17 00:52:13.843277 | orchestrator | 2026-03-17 00:52:13 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:13.843338 | orchestrator | 2026-03-17 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:16.901594 | orchestrator | 2026-03-17 00:52:16 | INFO  | Task ef824fdf-f02f-406a-9040-c3d7fa04f0bf is in state STARTED 2026-03-17 00:52:16.901665 | orchestrator | 2026-03-17 00:52:16 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:16.901670 | orchestrator | 2026-03-17 00:52:16 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:16.902852 | orchestrator | 2026-03-17 00:52:16 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:16.902928 | orchestrator | 2026-03-17 00:52:16 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:16.903698 | orchestrator | 2026-03-17 00:52:16 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:16.904112 | orchestrator | 2026-03-17 00:52:16 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:16.904134 | orchestrator | 2026-03-17 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:19.969180 | orchestrator | 2026-03-17 
00:52:19 | INFO  | Task ef824fdf-f02f-406a-9040-c3d7fa04f0bf is in state STARTED 2026-03-17 00:52:19.974443 | orchestrator | 2026-03-17 00:52:19 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:19.978473 | orchestrator | 2026-03-17 00:52:19 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:19.995033 | orchestrator | 2026-03-17 00:52:19 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:19.996867 | orchestrator | 2026-03-17 00:52:19 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:20.001986 | orchestrator | 2026-03-17 00:52:19 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:20.002259 | orchestrator | 2026-03-17 00:52:19 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:20.002270 | orchestrator | 2026-03-17 00:52:19 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:23.233728 | orchestrator | 2026-03-17 00:52:23.233798 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:52:23.233804 | orchestrator | 2026-03-17 00:52:23.233809 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:52:23.233813 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:00.284) 0:00:00.284 ********* 2026-03-17 00:52:23.233818 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:52:23.233823 | orchestrator | 2026-03-17 00:52:23.233828 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:52:23.233832 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:00.102) 0:00:00.387 ********* 2026-03-17 00:52:23.233837 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-17 00:52:23.233841 | orchestrator | 2026-03-17 00:52:23.233845 | orchestrator 
| PLAY [Apply role opensearch] *************************************************** 2026-03-17 00:52:23.233849 | orchestrator | 2026-03-17 00:52:23.233853 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 00:52:23.233857 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:00.154) 0:00:00.542 ********* 2026-03-17 00:52:23.233861 | orchestrator | included: /ansible/roles/opensearch/tasks/pull.yml for testbed-node-0 2026-03-17 00:52:23.233865 | orchestrator | 2026-03-17 00:52:23.233869 | orchestrator | TASK [service-images-pull : opensearch | Pull images] ************************** 2026-03-17 00:52:23.233873 | orchestrator | Tuesday 17 March 2026 00:49:38 +0000 (0:00:00.182) 0:00:00.724 ********* 2026-03-17 00:52:23.233877 | orchestrator | changed: [testbed-node-0] => (item=opensearch) 2026-03-17 00:52:23.233881 | orchestrator | 2026-03-17 00:52:23.233885 | orchestrator | STILL ALIVE [task 'service-images-pull : opensearch | Pull images' is running] *** 2026-03-17 00:52:23.233890 | orchestrator | changed: [testbed-node-0] => (item=opensearch-dashboards) 2026-03-17 00:52:23.233894 | orchestrator | 2026-03-17 00:52:23.233897 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:52:23.233901 | orchestrator | testbed-node-0 : ok=4  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:52:23.233907 | orchestrator | 2026-03-17 00:52:23.233911 | orchestrator | 2026-03-17 00:52:23.233915 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:52:23.233973 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:02:32.265) 0:02:32.990 ********* 2026-03-17 00:52:23.233982 | orchestrator | =============================================================================== 2026-03-17 00:52:23.233988 | orchestrator | service-images-pull : opensearch | Pull images 
------------------------ 152.27s 2026-03-17 00:52:23.233993 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.18s 2026-03-17 00:52:23.234084 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.15s 2026-03-17 00:52:23.234091 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.10s 2026-03-17 00:52:23.234097 | orchestrator | 2026-03-17 00:52:23.234104 | orchestrator | 2026-03-17 00:52:23.234111 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-17 00:52:23.234118 | orchestrator | 2026-03-17 00:52:23.234125 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-03-17 00:52:23.234131 | orchestrator | Tuesday 17 March 2026 00:52:03 +0000 (0:00:00.684) 0:00:00.684 ********* 2026-03-17 00:52:23.234138 | orchestrator | changed: [testbed-manager] 2026-03-17 00:52:23.234144 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:52:23.234148 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:52:23.234152 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:52:23.234155 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:52:23.234159 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:52:23.234163 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:52:23.234167 | orchestrator | 2026-03-17 00:52:23.234171 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-03-17 00:52:23.234182 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:04.738) 0:00:05.423 ********* 2026-03-17 00:52:23.234188 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-17 00:52:23.234195 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-17 00:52:23.234201 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-17 00:52:23.234207 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-17 00:52:23.234213 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-17 00:52:23.234219 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-17 00:52:23.234225 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-17 00:52:23.234231 | orchestrator | 2026-03-17 00:52:23.234236 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-03-17 00:52:23.234243 | orchestrator | Tuesday 17 March 2026 00:52:12 +0000 (0:00:03.649) 0:00:09.073 ********* 2026-03-17 00:52:23.234253 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:52:09.368675', 'end': '2026-03-17 00:52:09.375334', 'delta': '0:00:00.006659', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:52:23.234283 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:52:09.765294', 'end': '2026-03-17 00:52:09.772415', 'delta': '0:00:00.007121', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:52:23.234299 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:52:10.682283', 'end': '2026-03-17 00:52:10.687221', 'delta': '0:00:00.004938', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:52:23.234305 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:52:11.005068', 'end': '2026-03-17 00:52:11.013637', 'delta': '0:00:00.008569', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:52:23.234317 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:52:11.926069', 'end': '2026-03-17 00:52:11.932889', 'delta': '0:00:00.006820', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:52:23.234324 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:52:12.140087', 'end': '2026-03-17 00:52:12.148152', 'delta': '0:00:00.008065', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:52:23.234337 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-17 00:52:09.288071', 'end': '2026-03-17 00:52:09.295161', 'delta': '0:00:00.007090', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-17 00:52:23.234349 | orchestrator | 2026-03-17 00:52:23.234354 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-03-17 00:52:23.234359 | orchestrator | Tuesday 17 March 2026 00:52:13 +0000 (0:00:01.455) 0:00:10.530 ********* 2026-03-17 00:52:23.234363 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-17 00:52:23.234368 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-17 00:52:23.234372 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-17 00:52:23.234376 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-17 00:52:23.234381 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-17 00:52:23.234385 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-17 00:52:23.234389 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-17 00:52:23.234394 | orchestrator | 2026-03-17 00:52:23.234399 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-03-17 00:52:23.234403 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:02.789) 0:00:13.319 ********* 2026-03-17 00:52:23.234407 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-17 00:52:23.234412 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-17 00:52:23.234416 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-17 00:52:23.234420 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-17 00:52:23.234424 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-17 00:52:23.234429 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-17 00:52:23.234433 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-17 00:52:23.234437 | orchestrator | 2026-03-17 00:52:23.234442 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:52:23.234446 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:52:23.234451 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:52:23.234455 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:52:23.234459 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:52:23.234467 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:52:23.234472 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:52:23.234476 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:52:23.234480 | orchestrator | 2026-03-17 00:52:23.234485 | orchestrator | 2026-03-17 00:52:23.234489 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:52:23.234493 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:02.580) 0:00:15.900 ********* 2026-03-17 00:52:23.234498 | orchestrator | =============================================================================== 2026-03-17 00:52:23.234502 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.74s 2026-03-17 00:52:23.234506 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 3.65s 2026-03-17 00:52:23.234514 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.79s 2026-03-17 00:52:23.234519 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.58s 2026-03-17 00:52:23.234523 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. 
--- 1.46s 2026-03-17 00:52:23.234527 | orchestrator | 2026-03-17 00:52:23 | INFO  | Task ef824fdf-f02f-406a-9040-c3d7fa04f0bf is in state SUCCESS 2026-03-17 00:52:23.234532 | orchestrator | 2026-03-17 00:52:23 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:23.234537 | orchestrator | 2026-03-17 00:52:23 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:23.234544 | orchestrator | 2026-03-17 00:52:23 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:23.234548 | orchestrator | 2026-03-17 00:52:23 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:23.234553 | orchestrator | 2026-03-17 00:52:23 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:23.234557 | orchestrator | 2026-03-17 00:52:23 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:23.234561 | orchestrator | 2026-03-17 00:52:23 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:23.234565 | orchestrator | 2026-03-17 00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:26.265189 | orchestrator | 2026-03-17 00:52:26 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:26.265250 | orchestrator | 2026-03-17 00:52:26 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:26.269732 | orchestrator | 2026-03-17 00:52:26 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:26.269785 | orchestrator | 2026-03-17 00:52:26 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:26.269970 | orchestrator | 2026-03-17 00:52:26 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:26.270406 | orchestrator | 2026-03-17 00:52:26 | INFO  | Task 
1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:26.271174 | orchestrator | 2026-03-17 00:52:26 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:26.271311 | orchestrator | 2026-03-17 00:52:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:29.302374 | orchestrator | 2026-03-17 00:52:29 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:29.302488 | orchestrator | 2026-03-17 00:52:29 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:29.303622 | orchestrator | 2026-03-17 00:52:29 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:29.304410 | orchestrator | 2026-03-17 00:52:29 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:29.305232 | orchestrator | 2026-03-17 00:52:29 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:29.305943 | orchestrator | 2026-03-17 00:52:29 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:29.306796 | orchestrator | 2026-03-17 00:52:29 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:29.306874 | orchestrator | 2026-03-17 00:52:29 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:32.341488 | orchestrator | 2026-03-17 00:52:32 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:32.342805 | orchestrator | 2026-03-17 00:52:32 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:32.346150 | orchestrator | 2026-03-17 00:52:32 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:32.347137 | orchestrator | 2026-03-17 00:52:32 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:32.348727 | orchestrator | 2026-03-17 00:52:32 | INFO  | Task 
7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:32.349584 | orchestrator | 2026-03-17 00:52:32 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:32.351895 | orchestrator | 2026-03-17 00:52:32 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:32.351942 | orchestrator | 2026-03-17 00:52:32 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:35.448947 | orchestrator | 2026-03-17 00:52:35 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:35.449007 | orchestrator | 2026-03-17 00:52:35 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:35.461173 | orchestrator | 2026-03-17 00:52:35 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:35.465896 | orchestrator | 2026-03-17 00:52:35 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:35.465955 | orchestrator | 2026-03-17 00:52:35 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:35.465961 | orchestrator | 2026-03-17 00:52:35 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:35.465966 | orchestrator | 2026-03-17 00:52:35 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:35.465972 | orchestrator | 2026-03-17 00:52:35 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:38.572736 | orchestrator | 2026-03-17 00:52:38 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:38.572814 | orchestrator | 2026-03-17 00:52:38 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:38.572827 | orchestrator | 2026-03-17 00:52:38 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:38.572833 | orchestrator | 2026-03-17 00:52:38 | INFO  | Task 
9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:38.572839 | orchestrator | 2026-03-17 00:52:38 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:38.572847 | orchestrator | 2026-03-17 00:52:38 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:38.572856 | orchestrator | 2026-03-17 00:52:38 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:38.572865 | orchestrator | 2026-03-17 00:52:38 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:41.594673 | orchestrator | 2026-03-17 00:52:41 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:41.594903 | orchestrator | 2026-03-17 00:52:41 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:41.594925 | orchestrator | 2026-03-17 00:52:41 | INFO  | Task a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state STARTED 2026-03-17 00:52:41.594931 | orchestrator | 2026-03-17 00:52:41 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:41.594937 | orchestrator | 2026-03-17 00:52:41 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:41.594957 | orchestrator | 2026-03-17 00:52:41 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:41.594963 | orchestrator | 2026-03-17 00:52:41 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:41.594968 | orchestrator | 2026-03-17 00:52:41 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:44.629263 | orchestrator | 2026-03-17 00:52:44 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:44.632406 | orchestrator | 2026-03-17 00:52:44 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:44.632910 | orchestrator | 2026-03-17 00:52:44 | INFO  | Task 
a4cb237e-996a-4093-8ae8-5ceeab28ab9e is in state SUCCESS 2026-03-17 00:52:44.633909 | orchestrator | 2026-03-17 00:52:44 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:44.635276 | orchestrator | 2026-03-17 00:52:44 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:44.636982 | orchestrator | 2026-03-17 00:52:44 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:44.637768 | orchestrator | 2026-03-17 00:52:44 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:44.637802 | orchestrator | 2026-03-17 00:52:44 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:47.695429 | orchestrator | 2026-03-17 00:52:47 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:47.695485 | orchestrator | 2026-03-17 00:52:47 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:47.695593 | orchestrator | 2026-03-17 00:52:47 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:47.696926 | orchestrator | 2026-03-17 00:52:47 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:47.697194 | orchestrator | 2026-03-17 00:52:47 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:47.697890 | orchestrator | 2026-03-17 00:52:47 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:47.697920 | orchestrator | 2026-03-17 00:52:47 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:50.778595 | orchestrator | 2026-03-17 00:52:50 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:50.778825 | orchestrator | 2026-03-17 00:52:50 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state STARTED 2026-03-17 00:52:50.780193 | orchestrator | 2026-03-17 00:52:50 | INFO  | Task 
9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:50.780798 | orchestrator | 2026-03-17 00:52:50 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:50.783026 | orchestrator | 2026-03-17 00:52:50 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:50.784418 | orchestrator | 2026-03-17 00:52:50 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:50.784459 | orchestrator | 2026-03-17 00:52:50 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:53.884683 | orchestrator | 2026-03-17 00:52:53 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:53.886390 | orchestrator | 2026-03-17 00:52:53 | INFO  | Task d310ffed-b990-422c-ad55-0ba7ee17d4b2 is in state SUCCESS 2026-03-17 00:52:53.887611 | orchestrator | 2026-03-17 00:52:53 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:53.889090 | orchestrator | 2026-03-17 00:52:53 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:53.890298 | orchestrator | 2026-03-17 00:52:53 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:53.891749 | orchestrator | 2026-03-17 00:52:53 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:53.891768 | orchestrator | 2026-03-17 00:52:53 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:56.924298 | orchestrator | 2026-03-17 00:52:56 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:56.925261 | orchestrator | 2026-03-17 00:52:56 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:56.927187 | orchestrator | 2026-03-17 00:52:56 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:56.928844 | orchestrator | 2026-03-17 00:52:56 | INFO  | Task 
1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:56.929307 | orchestrator | 2026-03-17 00:52:56 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:56.930218 | orchestrator | 2026-03-17 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:52:59.979743 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:52:59.981241 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:52:59.984929 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:52:59.989007 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:52:59.997764 | orchestrator | 2026-03-17 00:52:59 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:52:59.997812 | orchestrator | 2026-03-17 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:03.051478 | orchestrator | 2026-03-17 00:53:03 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:03.052631 | orchestrator | 2026-03-17 00:53:03 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:03.053184 | orchestrator | 2026-03-17 00:53:03 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:03.054064 | orchestrator | 2026-03-17 00:53:03 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:03.055726 | orchestrator | 2026-03-17 00:53:03 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:03.055765 | orchestrator | 2026-03-17 00:53:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:06.210709 | orchestrator | 2026-03-17 00:53:06 | INFO  | Task 
dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:06.212657 | orchestrator | 2026-03-17 00:53:06 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:06.217029 | orchestrator | 2026-03-17 00:53:06 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:06.219913 | orchestrator | 2026-03-17 00:53:06 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:06.225956 | orchestrator | 2026-03-17 00:53:06 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:06.226506 | orchestrator | 2026-03-17 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:09.356090 | orchestrator | 2026-03-17 00:53:09 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:09.359146 | orchestrator | 2026-03-17 00:53:09 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:09.360205 | orchestrator | 2026-03-17 00:53:09 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:09.361286 | orchestrator | 2026-03-17 00:53:09 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:09.362646 | orchestrator | 2026-03-17 00:53:09 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:09.362718 | orchestrator | 2026-03-17 00:53:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:12.422952 | orchestrator | 2026-03-17 00:53:12 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:12.427338 | orchestrator | 2026-03-17 00:53:12 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:12.428025 | orchestrator | 2026-03-17 00:53:12 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:12.439511 | orchestrator | 2026-03-17 00:53:12 | INFO  | Task 
1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:12.439577 | orchestrator | 2026-03-17 00:53:12 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:12.439586 | orchestrator | 2026-03-17 00:53:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:15.487127 | orchestrator | 2026-03-17 00:53:15 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:15.487301 | orchestrator | 2026-03-17 00:53:15 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:15.490966 | orchestrator | 2026-03-17 00:53:15 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:15.491658 | orchestrator | 2026-03-17 00:53:15 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:15.496090 | orchestrator | 2026-03-17 00:53:15 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:15.496151 | orchestrator | 2026-03-17 00:53:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:18.585500 | orchestrator | 2026-03-17 00:53:18 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:18.585627 | orchestrator | 2026-03-17 00:53:18 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:18.590573 | orchestrator | 2026-03-17 00:53:18 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:18.592617 | orchestrator | 2026-03-17 00:53:18 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:18.594456 | orchestrator | 2026-03-17 00:53:18 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:18.594522 | orchestrator | 2026-03-17 00:53:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:21.680995 | orchestrator | 2026-03-17 00:53:21 | INFO  | Task 
dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:21.681045 | orchestrator | 2026-03-17 00:53:21 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:21.682365 | orchestrator | 2026-03-17 00:53:21 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:21.684049 | orchestrator | 2026-03-17 00:53:21 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:21.684548 | orchestrator | 2026-03-17 00:53:21 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:21.684561 | orchestrator | 2026-03-17 00:53:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:24.726652 | orchestrator | 2026-03-17 00:53:24 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:24.729680 | orchestrator | 2026-03-17 00:53:24 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:24.733133 | orchestrator | 2026-03-17 00:53:24 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:24.735620 | orchestrator | 2026-03-17 00:53:24 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:24.741584 | orchestrator | 2026-03-17 00:53:24 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:24.741634 | orchestrator | 2026-03-17 00:53:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:27.781489 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:27.783140 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:27.784659 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:27.787024 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task 
1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:27.788536 | orchestrator | 2026-03-17 00:53:27 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:27.788584 | orchestrator | 2026-03-17 00:53:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:30.832746 | orchestrator | 2026-03-17 00:53:30 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:30.833218 | orchestrator | 2026-03-17 00:53:30 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:30.834314 | orchestrator | 2026-03-17 00:53:30 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:30.836825 | orchestrator | 2026-03-17 00:53:30 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:30.838084 | orchestrator | 2026-03-17 00:53:30 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:30.838120 | orchestrator | 2026-03-17 00:53:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:33.872987 | orchestrator | 2026-03-17 00:53:33 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:33.874544 | orchestrator | 2026-03-17 00:53:33 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:33.875887 | orchestrator | 2026-03-17 00:53:33 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:33.877194 | orchestrator | 2026-03-17 00:53:33 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:33.878438 | orchestrator | 2026-03-17 00:53:33 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:33.878482 | orchestrator | 2026-03-17 00:53:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:36.916888 | orchestrator | 2026-03-17 00:53:36 | INFO  | Task 
dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:36.917946 | orchestrator | 2026-03-17 00:53:36 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:36.920475 | orchestrator | 2026-03-17 00:53:36 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:36.923699 | orchestrator | 2026-03-17 00:53:36 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:36.925442 | orchestrator | 2026-03-17 00:53:36 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state STARTED 2026-03-17 00:53:36.926624 | orchestrator | 2026-03-17 00:53:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:39.994939 | orchestrator | 2026-03-17 00:53:39 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:53:39.996895 | orchestrator | 2026-03-17 00:53:39 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED 2026-03-17 00:53:40.005172 | orchestrator | 2026-03-17 00:53:40 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:53:40.005216 | orchestrator | 2026-03-17 00:53:40 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED 2026-03-17 00:53:40.005221 | orchestrator | 2026-03-17 00:53:40 | INFO  | Task 09bff0ae-7ce5-4815-a240-7f0720918d1c is in state SUCCESS 2026-03-17 00:53:40.005226 | orchestrator | 2026-03-17 00:53:40 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:53:40.008217 | orchestrator | 2026-03-17 00:53:40.008272 | orchestrator | 2026-03-17 00:53:40.008281 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-03-17 00:53:40.008288 | orchestrator | 2026-03-17 00:53:40.008295 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-03-17 00:53:40.008302 | orchestrator | Tuesday 17 March 2026 00:52:04 +0000 (0:00:00.739) 
0:00:00.739 ********* 2026-03-17 00:53:40.008308 | orchestrator | ok: [testbed-manager] => { 2026-03-17 00:53:40.008316 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2026-03-17 00:53:40.008323 | orchestrator | } 2026-03-17 00:53:40.008330 | orchestrator | 2026-03-17 00:53:40.008337 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-03-17 00:53:40.008343 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.672) 0:00:01.412 ********* 2026-03-17 00:53:40.008350 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:40.008357 | orchestrator | 2026-03-17 00:53:40.008364 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-03-17 00:53:40.008384 | orchestrator | Tuesday 17 March 2026 00:52:07 +0000 (0:00:02.563) 0:00:03.975 ********* 2026-03-17 00:53:40.008391 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-03-17 00:53:40.008398 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-03-17 00:53:40.008406 | orchestrator | 2026-03-17 00:53:40.008410 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-03-17 00:53:40.008414 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:01.001) 0:00:04.977 ********* 2026-03-17 00:53:40.008420 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:40.008426 | orchestrator | 2026-03-17 00:53:40.008433 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-03-17 00:53:40.008439 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:00:01.429) 0:00:06.406 ********* 2026-03-17 00:53:40.008446 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:40.008452 | orchestrator | 2026-03-17 00:53:40.008458 | orchestrator | TASK [osism.services.homer : Manage homer service] 
***************************** 2026-03-17 00:53:40.008464 | orchestrator | Tuesday 17 March 2026 00:52:13 +0000 (0:00:03.666) 0:00:10.073 ********* 2026-03-17 00:53:40.008471 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2026-03-17 00:53:40.008477 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:40.008483 | orchestrator | 2026-03-17 00:53:40.008489 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-03-17 00:53:40.008495 | orchestrator | Tuesday 17 March 2026 00:52:39 +0000 (0:00:25.592) 0:00:35.665 ********* 2026-03-17 00:53:40.008515 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:40.008522 | orchestrator | 2026-03-17 00:53:40.008528 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:53:40.008535 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:53:40.008542 | orchestrator | 2026-03-17 00:53:40.008548 | orchestrator | 2026-03-17 00:53:40.008554 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:53:40.008560 | orchestrator | Tuesday 17 March 2026 00:52:42 +0000 (0:00:02.933) 0:00:38.599 ********* 2026-03-17 00:53:40.008566 | orchestrator | =============================================================================== 2026-03-17 00:53:40.008573 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.59s 2026-03-17 00:53:40.008579 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 3.67s 2026-03-17 00:53:40.008585 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.93s 2026-03-17 00:53:40.008591 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.56s 2026-03-17 00:53:40.008597 | orchestrator | 
osism.services.homer : Copy config.yml configuration file --------------- 1.43s 2026-03-17 00:53:40.008607 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.00s 2026-03-17 00:53:40.008614 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.67s 2026-03-17 00:53:40.008620 | orchestrator | 2026-03-17 00:53:40.008626 | orchestrator | 2026-03-17 00:53:40.008632 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-17 00:53:40.008637 | orchestrator | 2026-03-17 00:53:40.008643 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-17 00:53:40.008649 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:01.446) 0:00:01.446 ********* 2026-03-17 00:53:40.008656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-17 00:53:40.008663 | orchestrator | 2026-03-17 00:53:40.008669 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-17 00:53:40.008675 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.386) 0:00:01.832 ********* 2026-03-17 00:53:40.008681 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-17 00:53:40.008688 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-17 00:53:40.008695 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-17 00:53:40.008701 | orchestrator | 2026-03-17 00:53:40.008708 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-17 00:53:40.008714 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:02.476) 0:00:04.309 ********* 2026-03-17 00:53:40.008720 | orchestrator | changed: 
[testbed-manager] 2026-03-17 00:53:40.008726 | orchestrator | 2026-03-17 00:53:40.008733 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-17 00:53:40.008739 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:00:02.128) 0:00:06.437 ********* 2026-03-17 00:53:40.008755 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-03-17 00:53:40.008762 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:40.008768 | orchestrator | 2026-03-17 00:53:40.008774 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-17 00:53:40.008780 | orchestrator | Tuesday 17 March 2026 00:52:44 +0000 (0:00:33.986) 0:00:40.424 ********* 2026-03-17 00:53:40.008788 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:40.008799 | orchestrator | 2026-03-17 00:53:40.008810 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-17 00:53:40.008822 | orchestrator | Tuesday 17 March 2026 00:52:45 +0000 (0:00:01.004) 0:00:41.429 ********* 2026-03-17 00:53:40.008842 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:40.008854 | orchestrator | 2026-03-17 00:53:40.008865 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-17 00:53:40.008875 | orchestrator | Tuesday 17 March 2026 00:52:46 +0000 (0:00:01.240) 0:00:42.669 ********* 2026-03-17 00:53:40.008887 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:40.008894 | orchestrator | 2026-03-17 00:53:40.008900 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-17 00:53:40.008907 | orchestrator | Tuesday 17 March 2026 00:52:49 +0000 (0:00:02.438) 0:00:45.108 ********* 2026-03-17 00:53:40.008915 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:40.008922 | orchestrator | 2026-03-17 
00:53:40.008929 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-17 00:53:40.008935 | orchestrator | Tuesday 17 March 2026 00:52:50 +0000 (0:00:01.234) 0:00:46.342 ********* 2026-03-17 00:53:40.008941 | orchestrator | changed: [testbed-manager] 2026-03-17 00:53:40.008948 | orchestrator | 2026-03-17 00:53:40.008954 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-17 00:53:40.008961 | orchestrator | Tuesday 17 March 2026 00:52:50 +0000 (0:00:00.504) 0:00:46.847 ********* 2026-03-17 00:53:40.008967 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:40.008974 | orchestrator | 2026-03-17 00:53:40.008980 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:53:40.008987 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:53:40.008994 | orchestrator | 2026-03-17 00:53:40.009000 | orchestrator | 2026-03-17 00:53:40.009005 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:53:40.009011 | orchestrator | Tuesday 17 March 2026 00:52:51 +0000 (0:00:00.371) 0:00:47.218 ********* 2026-03-17 00:53:40.009018 | orchestrator | =============================================================================== 2026-03-17 00:53:40.009025 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.99s 2026-03-17 00:53:40.009032 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.48s 2026-03-17 00:53:40.009038 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.44s 2026-03-17 00:53:40.009046 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.13s 2026-03-17 00:53:40.009053 | orchestrator | osism.services.openstackclient : Remove 
ospurge wrapper script ---------- 1.24s 2026-03-17 00:53:40.009059 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.23s 2026-03-17 00:53:40.009065 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.01s 2026-03-17 00:53:40.009110 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.50s 2026-03-17 00:53:40.009117 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.39s 2026-03-17 00:53:40.009124 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.37s 2026-03-17 00:53:40.009131 | orchestrator | 2026-03-17 00:53:40.009137 | orchestrator | 2026-03-17 00:53:40.009144 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:53:40.009151 | orchestrator | 2026-03-17 00:53:40.009158 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:53:40.009165 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:01.293) 0:00:01.293 ********* 2026-03-17 00:53:40.009172 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-17 00:53:40.009179 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-17 00:53:40.009187 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-17 00:53:40.009199 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-17 00:53:40.009209 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-17 00:53:40.009222 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-17 00:53:40.009244 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-17 00:53:40.009255 | orchestrator | 2026-03-17 00:53:40.009291 | orchestrator | PLAY [Apply role netdata] 
****************************************************** 2026-03-17 00:53:40.009299 | orchestrator | 2026-03-17 00:53:40.009305 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-03-17 00:53:40.009311 | orchestrator | Tuesday 17 March 2026 00:52:07 +0000 (0:00:02.279) 0:00:03.573 ********* 2026-03-17 00:53:40.009327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:53:40.009335 | orchestrator | 2026-03-17 00:53:40.009342 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-17 00:53:40.009349 | orchestrator | Tuesday 17 March 2026 00:52:09 +0000 (0:00:01.642) 0:00:05.215 ********* 2026-03-17 00:53:40.009356 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:53:40.009363 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:53:40.009369 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:53:40.009376 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:53:40.009383 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:53:40.009398 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:53:40.009405 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:40.009412 | orchestrator | 2026-03-17 00:53:40.009419 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-17 00:53:40.009426 | orchestrator | Tuesday 17 March 2026 00:52:14 +0000 (0:00:04.998) 0:00:10.213 ********* 2026-03-17 00:53:40.009433 | orchestrator | ok: [testbed-manager] 2026-03-17 00:53:40.009440 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:53:40.009447 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:53:40.009454 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:53:40.009461 | orchestrator | ok: [testbed-node-3] 2026-03-17 
00:53:40.009468 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:53:40.009475 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:53:40.009481 | orchestrator |
2026-03-17 00:53:40.009487 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-17 00:53:40.009492 | orchestrator | Tuesday 17 March 2026 00:52:18 +0000 (0:00:04.398) 0:00:14.611 *********
2026-03-17 00:53:40.009498 | orchestrator | changed: [testbed-manager]
2026-03-17 00:53:40.009504 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:53:40.009509 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:40.009515 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:40.009521 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:40.009527 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:53:40.009533 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:53:40.009539 | orchestrator |
2026-03-17 00:53:40.009545 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-17 00:53:40.009552 | orchestrator | Tuesday 17 March 2026 00:52:20 +0000 (0:00:02.052) 0:00:16.663 *********
2026-03-17 00:53:40.009559 | orchestrator | changed: [testbed-manager]
2026-03-17 00:53:40.009566 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:53:40.009573 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:40.009580 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:40.009587 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:40.009594 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:53:40.009601 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:53:40.009608 | orchestrator |
2026-03-17 00:53:40.009615 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-17 00:53:40.009622 | orchestrator | Tuesday 17 March 2026 00:52:30 +0000 (0:00:09.882) 0:00:26.545 *********
2026-03-17 00:53:40.009629 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:40.009636 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:53:40.009643 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:53:40.009650 | orchestrator | changed: [testbed-manager]
2026-03-17 00:53:40.009663 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:53:40.009670 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:40.009677 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:40.009684 | orchestrator |
2026-03-17 00:53:40.009691 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-17 00:53:40.009719 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:37.332) 0:01:03.878 *********
2026-03-17 00:53:40.009726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:53:40.009733 | orchestrator |
2026-03-17 00:53:40.009739 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-17 00:53:40.009745 | orchestrator | Tuesday 17 March 2026 00:53:09 +0000 (0:00:01.739) 0:01:05.618 *********
2026-03-17 00:53:40.009751 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-17 00:53:40.009758 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-17 00:53:40.009765 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-17 00:53:40.009771 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-17 00:53:40.009777 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-17 00:53:40.009783 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-17 00:53:40.009789 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-17 00:53:40.009798 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-17 00:53:40.009804 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-17 00:53:40.009810 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-17 00:53:40.009816 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-17 00:53:40.009822 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-17 00:53:40.009828 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-17 00:53:40.009834 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-17 00:53:40.009840 | orchestrator |
2026-03-17 00:53:40.009846 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-17 00:53:40.009853 | orchestrator | Tuesday 17 March 2026 00:53:14 +0000 (0:00:05.408) 0:01:11.026 *********
2026-03-17 00:53:40.009859 | orchestrator | ok: [testbed-manager]
2026-03-17 00:53:40.009865 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:53:40.009872 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:53:40.009878 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:53:40.009884 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:53:40.009890 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:53:40.009896 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:53:40.009902 | orchestrator |
2026-03-17 00:53:40.009909 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-17 00:53:40.009915 | orchestrator | Tuesday 17 March 2026 00:53:16 +0000 (0:00:01.550) 0:01:12.838 *********
2026-03-17 00:53:40.009921 | orchestrator | changed: [testbed-manager]
2026-03-17 00:53:40.009927 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:40.009933 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:40.009939 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:40.009945 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:53:40.009951 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:53:40.009958 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:53:40.009964 | orchestrator |
2026-03-17 00:53:40.009971 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-17 00:53:40.009983 | orchestrator | Tuesday 17 March 2026 00:53:18 +0000 (0:00:01.550) 0:01:14.388 *********
2026-03-17 00:53:40.009991 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:53:40.009997 | orchestrator | ok: [testbed-manager]
2026-03-17 00:53:40.010003 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:53:40.010060 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:53:40.010109 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:53:40.010115 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:53:40.010119 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:53:40.010122 | orchestrator |
2026-03-17 00:53:40.010126 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-17 00:53:40.010130 | orchestrator | Tuesday 17 March 2026 00:53:20 +0000 (0:00:02.067) 0:01:16.456 *********
2026-03-17 00:53:40.010134 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:53:40.010138 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:53:40.010142 | orchestrator | ok: [testbed-manager]
2026-03-17 00:53:40.010145 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:53:40.010149 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:53:40.010153 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:53:40.010157 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:53:40.010160 | orchestrator |
2026-03-17 00:53:40.010164 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-17 00:53:40.010168 | orchestrator | Tuesday 17 March 2026 00:53:22 +0000 (0:00:01.991) 0:01:18.378 *********
2026-03-17 00:53:40.010172 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-17 00:53:40.010177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:53:40.010182 | orchestrator |
2026-03-17 00:53:40.010186 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-17 00:53:40.010190 | orchestrator | Tuesday 17 March 2026 00:53:24 +0000 (0:00:01.784) 0:01:20.369 *********
2026-03-17 00:53:40.010194 | orchestrator | changed: [testbed-manager]
2026-03-17 00:53:40.010197 | orchestrator |
2026-03-17 00:53:40.010201 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-17 00:53:40.010205 | orchestrator | Tuesday 17 March 2026 00:53:26 +0000 (0:00:01.784) 0:01:22.154 *********
2026-03-17 00:53:40.010209 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:53:40.010213 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:53:40.010216 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:53:40.010220 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:53:40.010224 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:53:40.010228 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:53:40.010231 | orchestrator | changed: [testbed-manager]
2026-03-17 00:53:40.010235 | orchestrator |
2026-03-17 00:53:40.010239 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:53:40.010243 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:53:40.010247 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:53:40.010251 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:53:40.010255 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:53:40.010259 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:53:40.010262 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:53:40.010269 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:53:40.010273 | orchestrator |
2026-03-17 00:53:40.010277 | orchestrator |
2026-03-17 00:53:40.010281 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:53:40.010291 | orchestrator | Tuesday 17 March 2026 00:53:37 +0000 (0:00:11.249) 0:01:33.403 *********
2026-03-17 00:53:40.010295 | orchestrator | ===============================================================================
2026-03-17 00:53:40.010299 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 37.33s
2026-03-17 00:53:40.010303 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.25s
2026-03-17 00:53:40.010306 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.88s
2026-03-17 00:53:40.010310 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.41s
2026-03-17 00:53:40.010314 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 5.00s
2026-03-17 00:53:40.010318 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.40s
2026-03-17 00:53:40.010321 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.28s
2026-03-17 00:53:40.010325 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.07s
2026-03-17 00:53:40.010329 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.05s
2026-03-17 00:53:40.010333 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.99s
2026-03-17 00:53:40.010337 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.92s
2026-03-17 00:53:40.010344 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.81s
2026-03-17 00:53:40.010348 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.78s
2026-03-17 00:53:40.010352 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.74s
2026-03-17 00:53:40.010356 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.64s
2026-03-17 00:53:40.010360 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.55s
2026-03-17 00:53:43.058006 | orchestrator | 2026-03-17 00:53:43 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:53:43.058738 | orchestrator | 2026-03-17 00:53:43 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:53:43.060673 | orchestrator | 2026-03-17 00:53:43 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:53:43.060716 | orchestrator | 2026-03-17 00:53:43 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED
2026-03-17 00:53:43.060726 | orchestrator | 2026-03-17 00:53:43 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:53:46.131278 | orchestrator | 2026-03-17 00:53:46 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:53:46.133313 | orchestrator | 2026-03-17 00:53:46 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
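The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from a client polling the task API until every task reaches SUCCESS. A minimal sketch of that wait loop (function and parameter names here are hypothetical, not the actual OSISM client):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until no task is left in STARTED, or time out.

    get_state: callable mapping a task id to its current state string
    (a hypothetical stand-in for the real task API client).
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
        # Keep polling only tasks that have not finished yet.
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            time.sleep(interval)  # "Wait 1 second(s) until the next check"
    return states
```

Each pass re-checks every pending task and drops the ones that have moved on, matching the pattern in the log where 1a727c7d reaches SUCCESS first while the others keep reporting STARTED.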
2026-03-17 00:53:46.136587 | orchestrator | 2026-03-17 00:53:46 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:53:46.140806 | orchestrator | 2026-03-17 00:53:46 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state STARTED
2026-03-17 00:53:46.140849 | orchestrator | 2026-03-17 00:53:46 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:53:49.214587 | orchestrator | 2026-03-17 00:53:49 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:53:49.217037 | orchestrator | 2026-03-17 00:53:49 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:53:49.218805 | orchestrator | 2026-03-17 00:53:49 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:53:49.220223 | orchestrator | 2026-03-17 00:53:49 | INFO  | Task 1a727c7d-a13e-4a06-995b-b0fdad720250 is in state SUCCESS
2026-03-17 00:53:49.220273 | orchestrator | 2026-03-17 00:53:49 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:53:52.293646 | orchestrator | 2026-03-17 00:53:52 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:53:52.295011 | orchestrator | 2026-03-17 00:53:52 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:53:52.297479 | orchestrator | 2026-03-17 00:53:52 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:53:52.297524 | orchestrator | 2026-03-17 00:53:52 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:53:55.358394 | orchestrator | 2026-03-17 00:53:55 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:53:55.361293 | orchestrator | 2026-03-17 00:53:55 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:53:55.364006 | orchestrator | 2026-03-17 00:53:55 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:53:55.364054 | orchestrator | 2026-03-17 00:53:55 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:53:58.404031 | orchestrator | 2026-03-17 00:53:58 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:53:58.405616 | orchestrator | 2026-03-17 00:53:58 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:53:58.407361 | orchestrator | 2026-03-17 00:53:58 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:53:58.407411 | orchestrator | 2026-03-17 00:53:58 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:01.464526 | orchestrator | 2026-03-17 00:54:01 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:01.466329 | orchestrator | 2026-03-17 00:54:01 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:54:01.468468 | orchestrator | 2026-03-17 00:54:01 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:01.468510 | orchestrator | 2026-03-17 00:54:01 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:04.510231 | orchestrator | 2026-03-17 00:54:04 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:04.512764 | orchestrator | 2026-03-17 00:54:04 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:54:04.514841 | orchestrator | 2026-03-17 00:54:04 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:04.514917 | orchestrator | 2026-03-17 00:54:04 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:07.552918 | orchestrator | 2026-03-17 00:54:07 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:07.553015 | orchestrator | 2026-03-17 00:54:07 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:54:07.553378 | orchestrator | 2026-03-17 00:54:07 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:07.553402 | orchestrator | 2026-03-17 00:54:07 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:10.581163 | orchestrator | 2026-03-17 00:54:10 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:10.582565 | orchestrator | 2026-03-17 00:54:10 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:54:10.584433 | orchestrator | 2026-03-17 00:54:10 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:10.584515 | orchestrator | 2026-03-17 00:54:10 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:13.624615 | orchestrator | 2026-03-17 00:54:13 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:13.624677 | orchestrator | 2026-03-17 00:54:13 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:54:13.627446 | orchestrator | 2026-03-17 00:54:13 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:13.627493 | orchestrator | 2026-03-17 00:54:13 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:16.667351 | orchestrator | 2026-03-17 00:54:16 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:16.669723 | orchestrator | 2026-03-17 00:54:16 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state STARTED
2026-03-17 00:54:16.671504 | orchestrator | 2026-03-17 00:54:16 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:16.671623 | orchestrator | 2026-03-17 00:54:16 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:19.707378 | orchestrator | 2026-03-17 00:54:19 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:19.710965 | orchestrator | 2026-03-17 00:54:19 | INFO  | Task 9ca6b2d7-3d71-4754-9fcb-4e1d690b1384 is in state
SUCCESS
2026-03-17 00:54:19.712195 | orchestrator |
2026-03-17 00:54:19.712260 | orchestrator |
2026-03-17 00:54:19.712271 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-17 00:54:19.712279 | orchestrator |
2026-03-17 00:54:19.712286 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-17 00:54:19.712293 | orchestrator | Tuesday 17 March 2026 00:52:23 +0000 (0:00:00.264) 0:00:00.264 *********
2026-03-17 00:54:19.712300 | orchestrator | ok: [testbed-manager]
2026-03-17 00:54:19.712311 | orchestrator |
2026-03-17 00:54:19.712322 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-17 00:54:19.712344 | orchestrator | Tuesday 17 March 2026 00:52:25 +0000 (0:00:01.709) 0:00:01.973 *********
2026-03-17 00:54:19.712359 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-17 00:54:19.712369 | orchestrator |
2026-03-17 00:54:19.712380 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-17 00:54:19.712392 | orchestrator | Tuesday 17 March 2026 00:52:26 +0000 (0:00:00.865) 0:00:02.839 *********
2026-03-17 00:54:19.712402 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:19.712414 | orchestrator |
2026-03-17 00:54:19.712425 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-17 00:54:19.712437 | orchestrator | Tuesday 17 March 2026 00:52:27 +0000 (0:00:01.147) 0:00:03.986 *********
2026-03-17 00:54:19.712449 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-17 00:54:19.712461 | orchestrator | ok: [testbed-manager]
2026-03-17 00:54:19.712472 | orchestrator |
2026-03-17 00:54:19.712484 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-17 00:54:19.712495 | orchestrator | Tuesday 17 March 2026 00:53:43 +0000 (0:01:16.436) 0:01:20.423 *********
2026-03-17 00:54:19.712506 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:19.712514 | orchestrator |
2026-03-17 00:54:19.712521 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:54:19.712528 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 00:54:19.712536 | orchestrator |
2026-03-17 00:54:19.712543 | orchestrator |
2026-03-17 00:54:19.712550 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:54:19.712557 | orchestrator | Tuesday 17 March 2026 00:53:47 +0000 (0:00:03.920) 0:01:24.343 *********
2026-03-17 00:54:19.712563 | orchestrator | ===============================================================================
2026-03-17 00:54:19.712583 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 76.44s
2026-03-17 00:54:19.712590 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.92s
2026-03-17 00:54:19.712597 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.71s
2026-03-17 00:54:19.712604 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.15s
2026-03-17 00:54:19.712611 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.87s
2026-03-17 00:54:19.712618 | orchestrator |
2026-03-17 00:54:19.712624 | orchestrator |
2026-03-17 00:54:19.712631 | orchestrator | PLAY [Apply role common]
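The "FAILED - RETRYING: … (10 retries left)." line above is the standard retry-with-delay pattern (in Ansible, the `retries`/`until`/`delay` task keywords): the task is re-run until its condition holds or the retry budget is exhausted. A minimal sketch of the same pattern as a generic helper (names are hypothetical, not the Ansible implementation):

```python
import time

def retry(func, retries=10, delay=5.0, log=print):
    """Call func until it succeeds, allowing up to `retries` extra attempts.

    Emits a message modeled on the FAILED - RETRYING log format.
    """
    attempts_left = retries
    while True:
        try:
            return func()
        except Exception as exc:
            if attempts_left == 0:
                raise  # retry budget exhausted; propagate the last failure
            log(f"FAILED - RETRYING: {exc} ({attempts_left} retries left).")
            attempts_left -= 1
            time.sleep(delay)
```

Here the service came up on the second attempt, so only one retry message was logged before the task reported ok.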
*******************************************************
2026-03-17 00:54:19.712638 | orchestrator |
2026-03-17 00:54:19.712645 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-17 00:54:19.712651 | orchestrator | Tuesday 17 March 2026 00:51:57 +0000 (0:00:00.295) 0:00:00.295 *********
2026-03-17 00:54:19.712658 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:54:19.712666 | orchestrator |
2026-03-17 00:54:19.712673 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-17 00:54:19.712679 | orchestrator | Tuesday 17 March 2026 00:51:58 +0000 (0:00:01.316) 0:00:01.612 *********
2026-03-17 00:54:19.712686 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-17 00:54:19.712693 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-17 00:54:19.712700 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-17 00:54:19.712706 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-17 00:54:19.712715 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-17 00:54:19.712727 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-17 00:54:19.712746 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-17 00:54:19.712758 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-17 00:54:19.712777 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-17 00:54:19.712790 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-17 00:54:19.712804 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-17 00:54:19.712817 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-17 00:54:19.712830 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-17 00:54:19.712842 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-17 00:54:19.712851 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-17 00:54:19.712859 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-17 00:54:19.712879 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-17 00:54:19.712887 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-17 00:54:19.712895 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-17 00:54:19.712903 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-17 00:54:19.712915 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-17 00:54:19.712923 | orchestrator |
2026-03-17 00:54:19.712931 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-17 00:54:19.712939 | orchestrator | Tuesday 17 March 2026 00:52:02 +0000 (0:00:04.203) 0:00:05.816 *********
2026-03-17 00:54:19.712953 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:54:19.712962 | orchestrator |
2026-03-17
00:54:19.712969 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-17 00:54:19.712975 | orchestrator | Tuesday 17 March 2026 00:52:04 +0000 (0:00:01.544) 0:00:07.360 ********* 2026-03-17 00:54:19.712986 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.712997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.713012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.713019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.713026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.713038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.713049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.713061 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.713068 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.713076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713115 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713169 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713184 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713223 | orchestrator |
2026-03-17 00:54:19.713230 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-17 00:54:19.713237 | orchestrator | Tuesday 17 March 2026 00:52:09 +0000 (0:00:04.926) 0:00:12.287 *********
2026-03-17 00:54:19.713252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713275 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713282 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713304 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:19.713311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713352 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713359 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:54:19.713366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713373 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:54:19.713380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713427 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:19.713434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713441 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:19.713448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713455 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:54:19.713462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713476 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:54:19.713483 | orchestrator |
2026-03-17 00:54:19.713490 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-03-17 00:54:19.713497 | orchestrator | Tuesday 17 March 2026 00:52:12 +0000 (0:00:03.325) 0:00:15.612 *********
2026-03-17 00:54:19.713504 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713539 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713564 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.713586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.713597 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:54:19.713604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.720317 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:19.720333 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:19.720347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720361 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:19.720497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720603 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:54:19.720612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.720621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.720657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720697 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:54:19.720710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.720738 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:54:19.720752 | orchestrator |
2026-03-17 00:54:19.720767 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-03-17 00:54:19.720783 | orchestrator | Tuesday 17 March 2026 00:52:15 +0000 (0:00:02.681) 0:00:18.293 *********
2026-03-17 00:54:19.720797 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:54:19.720810 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:19.720824 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:19.720845 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:19.720859 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:54:19.720873 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:54:19.720888 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:54:19.720902 | orchestrator |
2026-03-17 00:54:19.720915 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-17 00:54:19.720929 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:01.704) 0:00:19.998 *********
2026-03-17 00:54:19.720943 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:54:19.720957 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:19.720971 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:19.720981 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:19.720989 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:54:19.720996 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:54:19.721004 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:54:19.721015 | orchestrator |
2026-03-17 00:54:19.721034 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-17 00:54:19.721050 | orchestrator | Tuesday 17 March 2026 00:52:18 +0000 (0:00:01.309) 0:00:21.308 *********
2026-03-17 00:54:19.721063 | orchestrator | skipping: [testbed-manager]
2026-03-17 00:54:19.721076 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:19.721108 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:19.721121 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:19.721134 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:54:19.721147 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:54:19.721161 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:54:19.721174 | orchestrator |
2026-03-17 00:54:19.721188 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-03-17 00:54:19.721201 | orchestrator | Tuesday 17 March 2026 00:52:19 +0000 (0:00:01.588) 0:00:22.897 *********
2026-03-17 00:54:19.721210 | orchestrator | changed: [testbed-manager]
2026-03-17 00:54:19.721221 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:19.721230 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:19.721238 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:54:19.721248 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:19.721257 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:54:19.721265 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:54:19.721274 | orchestrator |
2026-03-17 00:54:19.721283 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-17 00:54:19.721292 | orchestrator | Tuesday 17 March 2026 00:52:23 +0000 (0:00:03.678) 0:00:26.575 *********
2026-03-17 00:54:19.721311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.721327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.721338 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.721355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.721364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.721383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.721392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-17 00:54:19.721402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.721423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.721433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.721452 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.721462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.721472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:54:19.721481 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.721492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.721505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.721518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.721532 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.721542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.721551 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.721561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.721570 | orchestrator | 2026-03-17 00:54:19.721579 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-17 00:54:19.721589 | orchestrator | Tuesday 17 
March 2026 00:52:28 +0000 (0:00:05.144) 0:00:31.719 ********* 2026-03-17 00:54:19.721598 | orchestrator | [WARNING]: Skipped 2026-03-17 00:54:19.721609 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-17 00:54:19.721626 | orchestrator | to this access issue: 2026-03-17 00:54:19.721635 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-17 00:54:19.721643 | orchestrator | directory 2026-03-17 00:54:19.721651 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:54:19.721659 | orchestrator | 2026-03-17 00:54:19.721667 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-17 00:54:19.721675 | orchestrator | Tuesday 17 March 2026 00:52:29 +0000 (0:00:01.079) 0:00:32.798 ********* 2026-03-17 00:54:19.721683 | orchestrator | [WARNING]: Skipped 2026-03-17 00:54:19.721691 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-17 00:54:19.721699 | orchestrator | to this access issue: 2026-03-17 00:54:19.721711 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-17 00:54:19.721725 | orchestrator | directory 2026-03-17 00:54:19.721738 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:54:19.721751 | orchestrator | 2026-03-17 00:54:19.721764 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-17 00:54:19.721778 | orchestrator | Tuesday 17 March 2026 00:52:30 +0000 (0:00:01.286) 0:00:34.085 ********* 2026-03-17 00:54:19.721792 | orchestrator | [WARNING]: Skipped 2026-03-17 00:54:19.721806 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-17 00:54:19.721820 | orchestrator | to this access issue: 2026-03-17 00:54:19.721835 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-17 00:54:19.721850 | orchestrator | directory 2026-03-17 00:54:19.721865 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:54:19.721880 | orchestrator | 2026-03-17 00:54:19.721903 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-17 00:54:19.721912 | orchestrator | Tuesday 17 March 2026 00:52:31 +0000 (0:00:00.982) 0:00:35.068 ********* 2026-03-17 00:54:19.721920 | orchestrator | [WARNING]: Skipped 2026-03-17 00:54:19.721928 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-17 00:54:19.721936 | orchestrator | to this access issue: 2026-03-17 00:54:19.721943 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-17 00:54:19.721951 | orchestrator | directory 2026-03-17 00:54:19.721959 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 00:54:19.721967 | orchestrator | 2026-03-17 00:54:19.721980 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-17 00:54:19.721989 | orchestrator | Tuesday 17 March 2026 00:52:32 +0000 (0:00:00.835) 0:00:35.904 ********* 2026-03-17 00:54:19.721996 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:19.722004 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:19.722043 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:19.722053 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:19.722061 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:19.722069 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:19.722077 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:19.722213 | orchestrator | 2026-03-17 00:54:19.722251 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-17 00:54:19.722290 | orchestrator | 
Tuesday 17 March 2026 00:52:37 +0000 (0:00:04.865) 0:00:40.769 ********* 2026-03-17 00:54:19.722306 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:54:19.722321 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:54:19.722335 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:54:19.722349 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:54:19.722364 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:54:19.722377 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:54:19.722391 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-17 00:54:19.722405 | orchestrator | 2026-03-17 00:54:19.722419 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-17 00:54:19.722433 | orchestrator | Tuesday 17 March 2026 00:52:40 +0000 (0:00:03.374) 0:00:44.144 ********* 2026-03-17 00:54:19.722443 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:19.722452 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:19.722459 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:19.722467 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:19.722475 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:19.722484 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:19.722491 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:19.722499 | orchestrator | 2026-03-17 00:54:19.722507 | orchestrator | TASK [common : Ensuring config directories have correct owner and 
permission] *** 2026-03-17 00:54:19.722515 | orchestrator | Tuesday 17 March 2026 00:52:43 +0000 (0:00:02.092) 0:00:46.237 ********* 2026-03-17 00:54:19.722525 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.722543 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.722552 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.722572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.722585 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.722594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.722603 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.722612 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.722621 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.722638 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.722647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.722655 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.722675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.722695 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.722715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.722728 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.722751 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.722764 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.722778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.722791 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.722813 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.722828 | orchestrator | 2026-03-17 00:54:19.722847 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-17 00:54:19.722858 | orchestrator | Tuesday 17 March 2026 00:52:45 +0000 (0:00:02.429) 0:00:48.667 ********* 
2026-03-17 00:54:19.722866 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:54:19.722874 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:54:19.722882 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:54:19.722890 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:54:19.722898 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:54:19.722906 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:54:19.722914 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-17 00:54:19.722922 | orchestrator | 2026-03-17 00:54:19.722929 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-17 00:54:19.722937 | orchestrator | Tuesday 17 March 2026 00:52:48 +0000 (0:00:02.769) 0:00:51.436 ********* 2026-03-17 00:54:19.722951 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:54:19.722959 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:54:19.722967 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:54:19.722975 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:54:19.722983 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:54:19.722991 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 
00:54:19.722999 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-17 00:54:19.723007 | orchestrator | 2026-03-17 00:54:19.723015 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-17 00:54:19.723023 | orchestrator | Tuesday 17 March 2026 00:52:51 +0000 (0:00:02.874) 0:00:54.310 ********* 2026-03-17 00:54:19.723031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.723040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.723048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.723066 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.723078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.723152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723185 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723193 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.723206 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-17 00:54:19.723228 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723245 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723271 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 
00:54:19.723280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723301 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:54:19.723310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-17 00:54:19.723323 | orchestrator | 2026-03-17 00:54:19.723331 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-17 00:54:19.723339 | orchestrator | Tuesday 17 March 2026 00:52:54 +0000 (0:00:02.989) 0:00:57.300 ********* 2026-03-17 00:54:19.723347 | orchestrator | changed: [testbed-manager] => { 2026-03-17 00:54:19.723356 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:54:19.723364 | orchestrator | } 2026-03-17 00:54:19.723372 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 00:54:19.723380 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:54:19.723388 | orchestrator | } 2026-03-17 00:54:19.723396 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 00:54:19.723404 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:54:19.723412 | orchestrator | } 2026-03-17 00:54:19.723420 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 00:54:19.723428 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:54:19.723436 | orchestrator | } 2026-03-17 00:54:19.723443 | orchestrator | changed: [testbed-node-3] => { 2026-03-17 00:54:19.723451 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:54:19.723459 | orchestrator | } 2026-03-17 00:54:19.723467 | orchestrator | changed: [testbed-node-4] => { 2026-03-17 00:54:19.723475 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:54:19.723483 | orchestrator | } 2026-03-17 00:54:19.723491 | orchestrator | changed: [testbed-node-5] => { 2026-03-17 00:54:19.723499 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:54:19.723507 | orchestrator | } 2026-03-17 00:54:19.723515 | orchestrator | 2026-03-17 00:54:19.723523 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 00:54:19.723531 | orchestrator | Tuesday 17 March 2026 00:52:54 +0000 (0:00:00.751) 0:00:58.051 ********* 2026-03-17 00:54:19.723539 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:54:19.723548 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723556 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:54:19.723588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:54:19.723618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723634 | orchestrator | skipping: [testbed-manager] 2026-03-17 00:54:19.723643 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:19.723651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:54:19.723659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723681 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:19.723698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:54:19.723707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-17 00:54:19.723715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723723 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:19.723731 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:54:19.723739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:54:19.723748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723772 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:54:19.723781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-17 00:54:19.723793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:54:19.723805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-03-17 00:54:19.723814 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:54:19.723822 | orchestrator | 2026-03-17 00:54:19.723830 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-17 00:54:19.723838 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:01.652) 0:00:59.703 ********* 2026-03-17 00:54:19.723846 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:19.723854 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:19.723862 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:19.723870 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:19.723878 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:19.723886 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:19.723894 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:19.723902 | orchestrator | 2026-03-17 00:54:19.723910 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-17 00:54:19.723918 | orchestrator | Tuesday 17 March 2026 00:52:57 +0000 (0:00:01.450) 0:01:01.154 ********* 2026-03-17 00:54:19.723926 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:19.723934 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:19.723941 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:19.723949 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:19.723957 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:19.723965 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:19.723973 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:19.723981 | orchestrator | 2026-03-17 00:54:19.723989 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:54:19.723997 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:01.211) 0:01:02.366 ********* 2026-03-17 00:54:19.724005 | orchestrator | 2026-03-17 00:54:19.724013 | 
orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:54:19.724021 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.063) 0:01:02.430 ********* 2026-03-17 00:54:19.724029 | orchestrator | 2026-03-17 00:54:19.724037 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:54:19.724044 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.058) 0:01:02.488 ********* 2026-03-17 00:54:19.724052 | orchestrator | 2026-03-17 00:54:19.724060 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:54:19.724073 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.060) 0:01:02.548 ********* 2026-03-17 00:54:19.724081 | orchestrator | 2026-03-17 00:54:19.724107 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:54:19.724115 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.059) 0:01:02.608 ********* 2026-03-17 00:54:19.724123 | orchestrator | 2026-03-17 00:54:19.724130 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:54:19.724138 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.059) 0:01:02.668 ********* 2026-03-17 00:54:19.724146 | orchestrator | 2026-03-17 00:54:19.724154 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-17 00:54:19.724162 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.059) 0:01:02.727 ********* 2026-03-17 00:54:19.724170 | orchestrator | 2026-03-17 00:54:19.724177 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-17 00:54:19.724185 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.080) 0:01:02.807 ********* 2026-03-17 00:54:19.724193 | orchestrator | changed: [testbed-node-0] 
2026-03-17 00:54:19.724201 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:19.724209 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:19.724217 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:19.724225 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:19.724232 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:19.724240 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:19.724248 | orchestrator | 2026-03-17 00:54:19.724256 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-17 00:54:19.724264 | orchestrator | Tuesday 17 March 2026 00:53:29 +0000 (0:00:30.234) 0:01:33.041 ********* 2026-03-17 00:54:19.724271 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:19.724279 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:19.724287 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:19.724295 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:19.724302 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:19.724310 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:19.724318 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:19.724326 | orchestrator | 2026-03-17 00:54:19.724334 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-17 00:54:19.724342 | orchestrator | Tuesday 17 March 2026 00:54:07 +0000 (0:00:37.304) 0:02:10.346 ********* 2026-03-17 00:54:19.724350 | orchestrator | ok: [testbed-manager] 2026-03-17 00:54:19.724364 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:54:19.724377 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:54:19.724410 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:54:19.724423 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:54:19.724436 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:54:19.724448 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:54:19.724461 | orchestrator | 2026-03-17 
00:54:19.724474 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-17 00:54:19.724487 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:02.056) 0:02:12.402 ********* 2026-03-17 00:54:19.724508 | orchestrator | changed: [testbed-manager] 2026-03-17 00:54:19.724522 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:19.724535 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:19.724549 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:54:19.724563 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:54:19.724576 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:54:19.724590 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:19.724603 | orchestrator | 2026-03-17 00:54:19.724614 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:54:19.724627 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:54:19.724636 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:54:19.724651 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:54:19.724659 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:54:19.724667 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:54:19.724675 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:54:19.724684 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:54:19.724691 | orchestrator | 2026-03-17 00:54:19.724699 | orchestrator | 2026-03-17 00:54:19.724707 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-17 00:54:19.724715 | orchestrator | Tuesday 17 March 2026 00:54:18 +0000 (0:00:09.828) 0:02:22.231 ********* 2026-03-17 00:54:19.724723 | orchestrator | =============================================================================== 2026-03-17 00:54:19.724731 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 37.30s 2026-03-17 00:54:19.724739 | orchestrator | common : Restart fluentd container ------------------------------------- 30.23s 2026-03-17 00:54:19.724747 | orchestrator | common : Restart cron container ----------------------------------------- 9.83s 2026-03-17 00:54:19.724755 | orchestrator | common : Copying over config.json files for services -------------------- 5.14s 2026-03-17 00:54:19.724763 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.93s 2026-03-17 00:54:19.724771 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.87s 2026-03-17 00:54:19.724779 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.20s 2026-03-17 00:54:19.724787 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.68s 2026-03-17 00:54:19.724795 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.38s 2026-03-17 00:54:19.724802 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.33s 2026-03-17 00:54:19.724828 | orchestrator | service-check-containers : common | Check containers -------------------- 2.99s 2026-03-17 00:54:19.724837 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.87s 2026-03-17 00:54:19.724845 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.77s 2026-03-17 00:54:19.724853 | orchestrator | service-cert-copy : common | 
Copying over backend internal TLS key ------ 2.68s
2026-03-17 00:54:19.724861 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.43s
2026-03-17 00:54:19.724869 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.09s
2026-03-17 00:54:19.724877 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.06s
2026-03-17 00:54:19.724885 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 1.70s
2026-03-17 00:54:19.724892 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.65s
2026-03-17 00:54:19.724900 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.59s
2026-03-17 00:54:19.724908 | orchestrator | 2026-03-17 00:54:19 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:19.724916 | orchestrator | 2026-03-17 00:54:19 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:22.756754 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:22.756892 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task a64c4815-d76f-4580-8f3d-9ea1ccf2494c is in state STARTED
2026-03-17 00:54:22.759838 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:22.760602 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:22.761224 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:22.761939 | orchestrator | 2026-03-17 00:54:22 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:22.762047 | orchestrator | 2026-03-17 00:54:22 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:25.798487 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:25.798677 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task a64c4815-d76f-4580-8f3d-9ea1ccf2494c is in state STARTED
2026-03-17 00:54:25.799360 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:25.799963 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:25.800565 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:25.801843 | orchestrator | 2026-03-17 00:54:25 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:25.801876 | orchestrator | 2026-03-17 00:54:25 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:28.829830 | orchestrator | 2026-03-17 00:54:28 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:28.829901 | orchestrator | 2026-03-17 00:54:28 | INFO  | Task a64c4815-d76f-4580-8f3d-9ea1ccf2494c is in state STARTED
2026-03-17 00:54:28.830170 | orchestrator | 2026-03-17 00:54:28 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:28.831146 | orchestrator | 2026-03-17 00:54:28 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:28.831488 | orchestrator | 2026-03-17 00:54:28 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:28.832364 | orchestrator | 2026-03-17 00:54:28 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:28.832384 | orchestrator | 2026-03-17 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:31.891215 | orchestrator | 2026-03-17 00:54:31 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:31.891373 | orchestrator | 2026-03-17 00:54:31 | INFO  | Task a64c4815-d76f-4580-8f3d-9ea1ccf2494c is in state STARTED
2026-03-17 00:54:31.892321 | orchestrator | 2026-03-17 00:54:31 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:31.893264 | orchestrator | 2026-03-17 00:54:31 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:31.893713 | orchestrator | 2026-03-17 00:54:31 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:31.894681 | orchestrator | 2026-03-17 00:54:31 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:31.894762 | orchestrator | 2026-03-17 00:54:31 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:34.925747 | orchestrator | 2026-03-17 00:54:34 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:34.925836 | orchestrator | 2026-03-17 00:54:34 | INFO  | Task a64c4815-d76f-4580-8f3d-9ea1ccf2494c is in state STARTED
2026-03-17 00:54:34.925974 | orchestrator | 2026-03-17 00:54:34 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:34.947480 | orchestrator | 2026-03-17 00:54:34 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:34.947562 | orchestrator | 2026-03-17 00:54:34 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:34.947574 | orchestrator | 2026-03-17 00:54:34 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:34.947583 | orchestrator | 2026-03-17 00:54:34 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:37.966936 | orchestrator | 2026-03-17 00:54:37 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:37.967023 | orchestrator | 2026-03-17 00:54:37 | INFO  | Task a64c4815-d76f-4580-8f3d-9ea1ccf2494c is in state SUCCESS
2026-03-17 00:54:37.967808 |
orchestrator |
2026-03-17 00:54:37.967906 | orchestrator |
2026-03-17 00:54:37.967931 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:54:37.967971 | orchestrator |
2026-03-17 00:54:37.968007 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 00:54:37.968028 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.572) 0:00:00.572 *********
2026-03-17 00:54:37.968048 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:37.968173 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:37.968189 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:37.968200 | orchestrator |
2026-03-17 00:54:37.968212 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 00:54:37.968224 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.489) 0:00:01.061 *********
2026-03-17 00:54:37.968237 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-17 00:54:37.968248 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-17 00:54:37.968259 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-17 00:54:37.968475 | orchestrator |
2026-03-17 00:54:37.968495 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-17 00:54:37.968507 | orchestrator |
2026-03-17 00:54:37.968535 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-17 00:54:37.968548 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:00.805) 0:00:01.867 *********
2026-03-17 00:54:37.968560 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:54:37.968572 | orchestrator |
2026-03-17 00:54:37.968583 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-17 00:54:37.968594 | orchestrator | Tuesday 17 March 2026 00:54:25 +0000 (0:00:00.936) 0:00:02.804 *********
2026-03-17 00:54:37.968605 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-17 00:54:37.968617 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-17 00:54:37.968629 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-17 00:54:37.968641 | orchestrator |
2026-03-17 00:54:37.968652 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-17 00:54:37.968663 | orchestrator | Tuesday 17 March 2026 00:54:26 +0000 (0:00:01.478) 0:00:04.282 *********
2026-03-17 00:54:37.968675 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-17 00:54:37.968686 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-17 00:54:37.968697 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-17 00:54:37.968709 | orchestrator |
2026-03-17 00:54:37.968720 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-03-17 00:54:37.968731 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:02.278) 0:00:06.561 *********
2026-03-17 00:54:37.968749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:54:37.968790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:54:37.968827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:54:37.968840 | orchestrator |
2026-03-17 00:54:37.968853 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-03-17 00:54:37.968873 | orchestrator | Tuesday 17 March 2026 00:54:30 +0000 (0:00:01.706) 0:00:08.267 *********
2026-03-17 00:54:37.968898 | orchestrator | changed: [testbed-node-0] => {
2026-03-17 00:54:37.968923 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:54:37.969118 | orchestrator | }
2026-03-17 00:54:37.969142 | orchestrator | changed: [testbed-node-1] => {
2026-03-17 00:54:37.969156 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:54:37.969170 | orchestrator | }
2026-03-17 00:54:37.969183 | orchestrator | changed: [testbed-node-2] => {
2026-03-17 00:54:37.969196 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:54:37.969209 | orchestrator | }
2026-03-17 00:54:37.969222 | orchestrator |
2026-03-17 00:54:37.969234 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-17 00:54:37.969247 | orchestrator | Tuesday 17 March 2026 00:54:31 +0000 (0:00:00.417) 0:00:08.685 *********
2026-03-17 00:54:37.969269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:54:37.969295 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:54:37.969308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:54:37.969321 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:54:37.969334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-17 00:54:37.969348 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:54:37.969360 | orchestrator |
2026-03-17 00:54:37.969373 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-17 00:54:37.969385 | orchestrator | Tuesday 17 March 2026 00:54:33 +0000 (0:00:02.282) 0:00:10.967 *********
2026-03-17 00:54:37.969396 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:54:37.969407 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:54:37.969418 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:54:37.969429 | orchestrator |
2026-03-17 00:54:37.969440 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:54:37.969453 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:54:37.969465 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:54:37.969476 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 00:54:37.969487 | orchestrator |
2026-03-17 00:54:37.969498 | orchestrator |
2026-03-17 00:54:37.969509 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:54:37.969520 | orchestrator | Tuesday 17 March 2026 00:54:36 +0000 (0:00:03.265) 0:00:14.233 *********
2026-03-17 00:54:37.969543 | orchestrator | ===============================================================================
2026-03-17 00:54:37.969555 | orchestrator | memcached : Restart memcached container --------------------------------- 3.26s
2026-03-17 00:54:37.969566 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.29s
2026-03-17 00:54:37.969577 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.28s
2026-03-17 00:54:37.969588 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.71s
2026-03-17 00:54:37.969599 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.48s
2026-03-17 00:54:37.969610 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.94s
2026-03-17 00:54:37.969621 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2026-03-17 00:54:37.969638 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s
2026-03-17 00:54:37.969650 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.42s
2026-03-17 00:54:37.969812
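The deploy wrapper driving this log checks each background task's state once per second until every task leaves STARTED. A minimal sketch of that polling pattern, assuming a hypothetical `get_state(task_id)` callable (the function name and signature are illustrative, not the OSISM API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_rounds=3600):
    """Poll get_state(task_id) until every task has left state STARTED."""
    for _ in range(max_rounds):
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks still in state STARTED after max_rounds")
```

The log's interleaving of STARTED and SUCCESS lines is just successive rounds of such a loop over the six task IDs.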
| orchestrator | 2026-03-17 00:54:37 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:37.969828 | orchestrator | 2026-03-17 00:54:37 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:37.969841 | orchestrator | 2026-03-17 00:54:37 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:37.969868 | orchestrator | 2026-03-17 00:54:37 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:37.969888 | orchestrator | 2026-03-17 00:54:37 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:41.019615 | orchestrator | 2026-03-17 00:54:41 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:41.020710 | orchestrator | 2026-03-17 00:54:41 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED
2026-03-17 00:54:41.021801 | orchestrator | 2026-03-17 00:54:41 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:41.022230 | orchestrator | 2026-03-17 00:54:41 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:41.022850 | orchestrator | 2026-03-17 00:54:41 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:41.023757 | orchestrator | 2026-03-17 00:54:41 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:41.023914 | orchestrator | 2026-03-17 00:54:41 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:44.056391 | orchestrator | 2026-03-17 00:54:44 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:44.056462 | orchestrator | 2026-03-17 00:54:44 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED
2026-03-17 00:54:44.057247 | orchestrator | 2026-03-17 00:54:44 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:44.059407 | orchestrator | 2026-03-17 00:54:44 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:44.060230 | orchestrator | 2026-03-17 00:54:44 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:44.061117 | orchestrator | 2026-03-17 00:54:44 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:44.061148 | orchestrator | 2026-03-17 00:54:44 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:47.108687 | orchestrator | 2026-03-17 00:54:47 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:47.108755 | orchestrator | 2026-03-17 00:54:47 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED
2026-03-17 00:54:47.108761 | orchestrator | 2026-03-17 00:54:47 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:47.111623 | orchestrator | 2026-03-17 00:54:47 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:47.111678 | orchestrator | 2026-03-17 00:54:47 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:47.111683 | orchestrator | 2026-03-17 00:54:47 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:47.111688 | orchestrator | 2026-03-17 00:54:47 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:50.172245 | orchestrator | 2026-03-17 00:54:50 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:50.172401 | orchestrator | 2026-03-17 00:54:50 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED
2026-03-17 00:54:50.173175 | orchestrator | 2026-03-17 00:54:50 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state STARTED
2026-03-17 00:54:50.173826 | orchestrator | 2026-03-17 00:54:50 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:54:50.174270 | orchestrator | 2026-03-17 00:54:50 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED
2026-03-17 00:54:50.174993 | orchestrator | 2026-03-17 00:54:50 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED
2026-03-17 00:54:50.175038 | orchestrator | 2026-03-17 00:54:50 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:54:53.205981 | orchestrator | 2026-03-17 00:54:53 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:54:53.207516 | orchestrator | 2026-03-17 00:54:53 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED
2026-03-17 00:54:53.208285 | orchestrator | 2026-03-17 00:54:53 | INFO  | Task a645b5dc-e60a-4743-a703-b8a0da39ba6b is in state SUCCESS
2026-03-17 00:54:53.209313 | orchestrator |
2026-03-17 00:54:53.209358 | orchestrator |
2026-03-17 00:54:53.209367 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:54:53.209374 | orchestrator |
2026-03-17 00:54:53.209380 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 00:54:53.209387 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:00.563) 0:00:00.563 *********
2026-03-17 00:54:53.209393 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:54:53.209399 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:54:53.209405 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:54:53.209411 | orchestrator |
2026-03-17 00:54:53.209417 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 00:54:53.209423 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.289) 0:00:00.852 *********
2026-03-17 00:54:53.209429 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-17 00:54:53.209435 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-17 00:54:53.209441 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-17 00:54:53.209447 | orchestrator |
2026-03-17 00:54:53.209453 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-17 00:54:53.209458 | orchestrator |
2026-03-17 00:54:53.209464 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-17 00:54:53.209470 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.306) 0:00:01.159 *********
2026-03-17 00:54:53.209476 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:54:53.209482 | orchestrator |
2026-03-17 00:54:53.209488 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-17 00:54:53.209494 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:01.403) 0:00:02.563 *********
2026-03-17 00:54:53.209502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209566 | orchestrator |
2026-03-17 00:54:53.209572 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-17 00:54:53.209578 | orchestrator | Tuesday 17 March 2026 00:54:26 +0000 (0:00:01.689) 0:00:04.252 *********
2026-03-17 00:54:53.209585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209642 | orchestrator |
2026-03-17 00:54:53.209648 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-17 00:54:53.209654 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:03.006) 0:00:07.259 *********
2026-03-17 00:54:53.209661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209708 | orchestrator |
2026-03-17 00:54:53.209714 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-03-17 00:54:53.209720 | orchestrator | Tuesday 17 March 2026 00:54:33 +0000 (0:00:03.761) 0:00:11.021 *********
2026-03-17 00:54:53.209726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-17 00:54:53.209774 | orchestrator |
2026-03-17 00:54:53.209780 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-03-17 00:54:53.209786 | orchestrator | Tuesday 17 March 2026 00:54:36 +0000 (0:00:02.985) 0:00:14.007 *********
2026-03-17 00:54:53.209792 | orchestrator | changed: [testbed-node-0] => {
2026-03-17 00:54:53.209798 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:54:53.209805 | orchestrator | }
2026-03-17 00:54:53.209811 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 00:54:53.209817 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:54:53.209822 | orchestrator | } 2026-03-17 00:54:53.209828 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 00:54:53.209834 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:54:53.209840 | orchestrator | } 2026-03-17 00:54:53.209847 | orchestrator | 2026-03-17 00:54:53.209853 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 00:54:53.209929 | orchestrator | Tuesday 17 March 2026 00:54:37 +0000 (0:00:01.554) 0:00:15.562 ********* 2026-03-17 00:54:53.209939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-17 00:54:53.209946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-17 
00:54:53.209952 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:54:53.209958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-17 00:54:53.209966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-17 00:54:53.209977 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:54:53.209987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 
6379'], 'timeout': '30'}}})  2026-03-17 00:54:53.210008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-17 00:54:53.210103 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:54:53.210115 | orchestrator | 2026-03-17 00:54:53.210123 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-17 00:54:53.210129 | orchestrator | Tuesday 17 March 2026 00:54:38 +0000 (0:00:00.820) 0:00:16.382 ********* 2026-03-17 00:54:53.210135 | orchestrator | 2026-03-17 00:54:53.210142 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-17 00:54:53.210148 | orchestrator | Tuesday 17 March 2026 00:54:38 +0000 (0:00:00.068) 0:00:16.451 ********* 2026-03-17 00:54:53.210154 | orchestrator | 2026-03-17 00:54:53.210160 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-17 00:54:53.210166 | orchestrator | Tuesday 17 March 2026 00:54:38 +0000 (0:00:00.063) 0:00:16.515 ********* 2026-03-17 00:54:53.210173 | orchestrator | 2026-03-17 00:54:53.210179 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-17 00:54:53.210185 | orchestrator | Tuesday 17 March 2026 00:54:38 +0000 (0:00:00.076) 0:00:16.592 ********* 2026-03-17 00:54:53.210191 | 
orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:53.210198 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:53.210204 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:53.210210 | orchestrator | 2026-03-17 00:54:53.210217 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-17 00:54:53.210223 | orchestrator | Tuesday 17 March 2026 00:54:47 +0000 (0:00:08.261) 0:00:24.853 ********* 2026-03-17 00:54:53.210229 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:54:53.210236 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:54:53.210242 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:54:53.210248 | orchestrator | 2026-03-17 00:54:53.210254 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:54:53.210261 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:54:53.210269 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:54:53.210275 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:54:53.210282 | orchestrator | 2026-03-17 00:54:53.210288 | orchestrator | 2026-03-17 00:54:53.210295 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:54:53.210301 | orchestrator | Tuesday 17 March 2026 00:54:51 +0000 (0:00:03.938) 0:00:28.792 ********* 2026-03-17 00:54:53.210308 | orchestrator | =============================================================================== 2026-03-17 00:54:53.210314 | orchestrator | redis : Restart redis container ----------------------------------------- 8.26s 2026-03-17 00:54:53.210321 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.94s 2026-03-17 00:54:53.210327 | orchestrator | redis : 
Copying over redis config files --------------------------------- 3.76s 2026-03-17 00:54:53.210334 | orchestrator | redis : Copying over default config.json files -------------------------- 3.01s 2026-03-17 00:54:53.210341 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.99s 2026-03-17 00:54:53.210347 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.69s 2026-03-17 00:54:53.210353 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.55s 2026-03-17 00:54:53.210360 | orchestrator | redis : include_tasks --------------------------------------------------- 1.40s 2026-03-17 00:54:53.210366 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.82s 2026-03-17 00:54:53.210372 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s 2026-03-17 00:54:53.210379 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-03-17 00:54:53.210389 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s 2026-03-17 00:54:53.210396 | orchestrator | 2026-03-17 00:54:53 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:54:53.210402 | orchestrator | 2026-03-17 00:54:53 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:54:53.211738 | orchestrator | 2026-03-17 00:54:53 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:54:53.211782 | orchestrator | 2026-03-17 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:56.246561 | orchestrator | 2026-03-17 00:54:56 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:54:56.246756 | orchestrator | 2026-03-17 00:54:56 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 
00:54:56.249699 | orchestrator | 2026-03-17 00:54:56 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:54:56.250079 | orchestrator | 2026-03-17 00:54:56 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:54:56.250622 | orchestrator | 2026-03-17 00:54:56 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:54:56.250643 | orchestrator | 2026-03-17 00:54:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:54:59.418453 | orchestrator | 2026-03-17 00:54:59 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:54:59.419897 | orchestrator | 2026-03-17 00:54:59 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:54:59.420375 | orchestrator | 2026-03-17 00:54:59 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:54:59.421046 | orchestrator | 2026-03-17 00:54:59 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:54:59.422836 | orchestrator | 2026-03-17 00:54:59 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:54:59.422876 | orchestrator | 2026-03-17 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:02.445314 | orchestrator | 2026-03-17 00:55:02 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:02.445665 | orchestrator | 2026-03-17 00:55:02 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:02.446399 | orchestrator | 2026-03-17 00:55:02 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:02.446905 | orchestrator | 2026-03-17 00:55:02 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:02.447574 | orchestrator | 2026-03-17 00:55:02 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 
00:55:02.447593 | orchestrator | 2026-03-17 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:05.509948 | orchestrator | 2026-03-17 00:55:05 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:05.510598 | orchestrator | 2026-03-17 00:55:05 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:05.512337 | orchestrator | 2026-03-17 00:55:05 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:05.513000 | orchestrator | 2026-03-17 00:55:05 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:05.513741 | orchestrator | 2026-03-17 00:55:05 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:55:05.513831 | orchestrator | 2026-03-17 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:08.558228 | orchestrator | 2026-03-17 00:55:08 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:08.560417 | orchestrator | 2026-03-17 00:55:08 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:08.561563 | orchestrator | 2026-03-17 00:55:08 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:08.562959 | orchestrator | 2026-03-17 00:55:08 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:08.564053 | orchestrator | 2026-03-17 00:55:08 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:55:08.564258 | orchestrator | 2026-03-17 00:55:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:11.600266 | orchestrator | 2026-03-17 00:55:11 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:11.600842 | orchestrator | 2026-03-17 00:55:11 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:11.602354 | orchestrator 
| 2026-03-17 00:55:11 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:11.603555 | orchestrator | 2026-03-17 00:55:11 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:11.606069 | orchestrator | 2026-03-17 00:55:11 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:55:11.606703 | orchestrator | 2026-03-17 00:55:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:14.640159 | orchestrator | 2026-03-17 00:55:14 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:14.641065 | orchestrator | 2026-03-17 00:55:14 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:14.641784 | orchestrator | 2026-03-17 00:55:14 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:14.644473 | orchestrator | 2026-03-17 00:55:14 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:14.645259 | orchestrator | 2026-03-17 00:55:14 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:55:14.645283 | orchestrator | 2026-03-17 00:55:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:17.679598 | orchestrator | 2026-03-17 00:55:17 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:17.680636 | orchestrator | 2026-03-17 00:55:17 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:17.681493 | orchestrator | 2026-03-17 00:55:17 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:17.682547 | orchestrator | 2026-03-17 00:55:17 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:17.685298 | orchestrator | 2026-03-17 00:55:17 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:55:17.686207 | orchestrator | 
2026-03-17 00:55:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:20.717191 | orchestrator | 2026-03-17 00:55:20 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:20.717882 | orchestrator | 2026-03-17 00:55:20 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:20.718679 | orchestrator | 2026-03-17 00:55:20 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:20.719478 | orchestrator | 2026-03-17 00:55:20 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:20.720201 | orchestrator | 2026-03-17 00:55:20 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:55:20.721229 | orchestrator | 2026-03-17 00:55:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:23.752765 | orchestrator | 2026-03-17 00:55:23 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:23.753876 | orchestrator | 2026-03-17 00:55:23 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:23.755815 | orchestrator | 2026-03-17 00:55:23 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:23.757186 | orchestrator | 2026-03-17 00:55:23 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:23.758467 | orchestrator | 2026-03-17 00:55:23 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:55:23.758496 | orchestrator | 2026-03-17 00:55:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:26.796862 | orchestrator | 2026-03-17 00:55:26 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:26.797668 | orchestrator | 2026-03-17 00:55:26 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:26.801326 | orchestrator | 2026-03-17 00:55:26 | INFO  | 
Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:26.803456 | orchestrator | 2026-03-17 00:55:26 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:26.805607 | orchestrator | 2026-03-17 00:55:26 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state STARTED 2026-03-17 00:55:26.805643 | orchestrator | 2026-03-17 00:55:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:29.840566 | orchestrator | 2026-03-17 00:55:29 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:29.840974 | orchestrator | 2026-03-17 00:55:29 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:55:29.841470 | orchestrator | 2026-03-17 00:55:29 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:29.842379 | orchestrator | 2026-03-17 00:55:29 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:29.842958 | orchestrator | 2026-03-17 00:55:29 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:29.844553 | orchestrator | 2026-03-17 00:55:29 | INFO  | Task 4b273bff-d211-4b44-8713-79f25b0991c4 is in state SUCCESS 2026-03-17 00:55:29.844626 | orchestrator | 2026-03-17 00:55:29 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:29.846295 | orchestrator | 2026-03-17 00:55:29.846323 | orchestrator | 2026-03-17 00:55:29.846329 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:55:29.846336 | orchestrator | 2026-03-17 00:55:29.846342 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:55:29.846352 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.651) 0:00:00.651 ********* 2026-03-17 00:55:29.846359 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:55:29.846369 | orchestrator | ok: 
[testbed-node-1] 2026-03-17 00:55:29.846375 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:55:29.846381 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:55:29.846387 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:55:29.846393 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:55:29.846401 | orchestrator | 2026-03-17 00:55:29.846407 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:55:29.846413 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:01.150) 0:00:01.802 ********* 2026-03-17 00:55:29.846442 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:55:29.846448 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:55:29.846455 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:55:29.846461 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:55:29.846467 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:55:29.846473 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-17 00:55:29.846478 | orchestrator | 2026-03-17 00:55:29.846485 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-17 00:55:29.846490 | orchestrator | 2026-03-17 00:55:29.846496 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-17 00:55:29.846503 | orchestrator | Tuesday 17 March 2026 00:54:25 +0000 (0:00:00.972) 0:00:02.775 ********* 2026-03-17 00:55:29.846510 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 00:55:29.846517 | orchestrator | 2026-03-17 
00:55:29.846524 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-17 00:55:29.846530 | orchestrator | Tuesday 17 March 2026 00:54:26 +0000 (0:00:01.119) 0:00:03.895 ********* 2026-03-17 00:55:29.846537 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-17 00:55:29.846544 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-17 00:55:29.846549 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-17 00:55:29.846553 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-17 00:55:29.846557 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-17 00:55:29.846560 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-17 00:55:29.846567 | orchestrator | 2026-03-17 00:55:29.846573 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-17 00:55:29.846581 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:02.131) 0:00:06.026 ********* 2026-03-17 00:55:29.846589 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-17 00:55:29.846597 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-17 00:55:29.846602 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-17 00:55:29.846608 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-17 00:55:29.846614 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-17 00:55:29.846620 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-17 00:55:29.846625 | orchestrator | 2026-03-17 00:55:29.846631 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-17 00:55:29.846637 | orchestrator | Tuesday 17 March 2026 00:54:30 +0000 (0:00:02.427) 0:00:08.454 ********* 2026-03-17 00:55:29.846643 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch) 
 2026-03-17 00:55:29.846648 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:55:29.846655 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-17 00:55:29.846661 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:55:29.846666 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-17 00:55:29.846672 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:55:29.846678 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-17 00:55:29.846683 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:55:29.846689 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-17 00:55:29.846695 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:55:29.846702 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-17 00:55:29.846709 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:55:29.846723 | orchestrator | 2026-03-17 00:55:29.846730 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-17 00:55:29.846736 | orchestrator | Tuesday 17 March 2026 00:54:32 +0000 (0:00:01.407) 0:00:09.862 ********* 2026-03-17 00:55:29.846743 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:55:29.846749 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:55:29.846754 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:55:29.846762 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:55:29.846766 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:55:29.846770 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:55:29.846774 | orchestrator | 2026-03-17 00:55:29.846777 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-17 00:55:29.846781 | orchestrator | Tuesday 17 March 2026 00:54:33 +0000 (0:00:01.021) 0:00:10.883 ********* 2026-03-17 00:55:29.846807 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-17 00:55:29.846815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-17 00:55:29.846819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.846852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.846856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846860 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.846868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.846878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.846882 | orchestrator |
2026-03-17 00:55:29.846886 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-17 00:55:29.846890 | orchestrator | Tuesday 17 March 2026 00:54:35 +0000 (0:00:02.561) 0:00:13.447 *********
2026-03-17 00:55:29.846894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.846955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.846965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.846972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.846986 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.846993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847009 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847015 | orchestrator |
2026-03-17 00:55:29.847021 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-17 00:55:29.847028 | orchestrator | Tuesday 17 March 2026 00:54:39 +0000 (0:00:03.756) 0:00:17.204 *********
2026-03-17 00:55:29.847053 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:55:29.847061 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:55:29.847066 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:55:29.847070 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:55:29.847075 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:55:29.847079 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:55:29.847084 | orchestrator |
2026-03-17 00:55:29.847088 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-03-17 00:55:29.847093 | orchestrator | Tuesday 17 March 2026 00:54:40 +0000 (0:00:00.990) 0:00:18.195 *********
2026-03-17 00:55:29.847097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847183 | orchestrator |
2026-03-17 00:55:29.847189 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-03-17 00:55:29.847195 | orchestrator | Tuesday 17 March 2026 00:54:43 +0000 (0:00:02.994) 0:00:21.190 *********
2026-03-17 00:55:29.847201 | orchestrator | changed: [testbed-node-0] => {
2026-03-17 00:55:29.847212 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:55:29.847219 | orchestrator | }
2026-03-17 00:55:29.847226 | orchestrator | changed: [testbed-node-1] => {
2026-03-17 00:55:29.847233 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:55:29.847239 | orchestrator | }
2026-03-17 00:55:29.847246 | orchestrator | changed: [testbed-node-2] => {
2026-03-17 00:55:29.847251 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:55:29.847255 | orchestrator | }
2026-03-17 00:55:29.847260 | orchestrator | changed: [testbed-node-3] => {
2026-03-17 00:55:29.847264 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:55:29.847268 | orchestrator | }
2026-03-17 00:55:29.847273 | orchestrator | changed: [testbed-node-4] => {
2026-03-17 00:55:29.847277 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:55:29.847280 | orchestrator | }
2026-03-17 00:55:29.847284 | orchestrator | changed: [testbed-node-5] => {
2026-03-17 00:55:29.847288 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:55:29.847292 | orchestrator | }
2026-03-17 00:55:29.847295 | orchestrator |
2026-03-17 00:55:29.847299 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-17 00:55:29.847303 | orchestrator | Tuesday 17 March 2026 00:54:44 +0000 (0:00:00.777) 0:00:21.967 *********
2026-03-17 00:55:29.847307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847315 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:55:29.847325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847337 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:55:29.847341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847349 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:55:29.847353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847365 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:55:29.847378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847395 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:55:29.847402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-17 00:55:29.847406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-17 00:55:29.847410 | orchestrator | skipping: [testbed-node-5]
2026-03-17 00:55:29.847414 | orchestrator |
2026-03-17 00:55:29.847418 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:55:29.847422 | orchestrator | Tuesday 17 March 2026 00:54:47 +0000 (0:00:02.807) 0:00:24.774 *********
2026-03-17 00:55:29.847426 | orchestrator |
2026-03-17 00:55:29.847430 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:55:29.847433 | orchestrator | Tuesday 17 March 2026 00:54:47 +0000 (0:00:00.574) 0:00:25.349 *********
2026-03-17 00:55:29.847437 | orchestrator |
2026-03-17 00:55:29.847441 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:55:29.847445 | orchestrator | Tuesday 17 March 2026 00:54:48 +0000 (0:00:00.316) 0:00:25.666 *********
2026-03-17 00:55:29.847448 | orchestrator |
2026-03-17 00:55:29.847452 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:55:29.847456 | orchestrator | Tuesday 17 March 2026 00:54:48 +0000 (0:00:00.409) 0:00:26.075 *********
2026-03-17 00:55:29.847460 | orchestrator |
2026-03-17 00:55:29.847466 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:55:29.847472 | orchestrator | Tuesday 17 March 2026 00:54:49 +0000 (0:00:00.466) 0:00:26.541 *********
2026-03-17 00:55:29.847478 | orchestrator |
2026-03-17 00:55:29.847485 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-17 00:55:29.847491 | orchestrator | Tuesday 17 March 2026 00:54:49 +0000 (0:00:00.376) 0:00:26.917 *********
2026-03-17 00:55:29.847497 | orchestrator |
2026-03-17 00:55:29.847504 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-17 00:55:29.847508 | orchestrator | Tuesday 17 March 2026 00:54:49 +0000 (0:00:00.203) 0:00:27.121 *********
2026-03-17 00:55:29.847516 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:55:29.847520 | orchestrator | changed: [testbed-node-3]
2026-03-17 00:55:29.847526 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:55:29.847532 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:55:29.847541 | orchestrator | changed: [testbed-node-5]
2026-03-17 00:55:29.847549 | orchestrator | changed: [testbed-node-4]
2026-03-17 00:55:29.847554 | orchestrator |
2026-03-17 00:55:29.847563 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-17 00:55:29.847574 | orchestrator | Tuesday 17 March 2026 00:54:54 +0000 (0:00:04.795) 0:00:31.916 *********
2026-03-17 00:55:29.847580 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:55:29.847586 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:55:29.847592 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:55:29.847598 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:55:29.847603 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:55:29.847609 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:55:29.847614 | orchestrator | 2026-03-17 00:55:29.847620 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-17 00:55:29.847625 | orchestrator | Tuesday 17 March 2026 00:54:55 +0000 (0:00:01.095) 0:00:33.011 ********* 2026-03-17 00:55:29.847631 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:55:29.847637 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:55:29.847642 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:55:29.847647 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:55:29.847652 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:55:29.847659 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:55:29.847665 | orchestrator | 2026-03-17 00:55:29.847671 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-17 00:55:29.847677 | orchestrator | Tuesday 17 March 2026 00:55:04 +0000 (0:00:08.875) 0:00:41.887 ********* 2026-03-17 00:55:29.847684 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-17 00:55:29.847689 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-17 00:55:29.847695 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-17 00:55:29.847700 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-17 00:55:29.847707 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-17 00:55:29.847712 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-17 00:55:29.847718 | 
orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-17 00:55:29.847724 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-17 00:55:29.847729 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-17 00:55:29.847735 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-17 00:55:29.847740 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-17 00:55:29.847745 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-17 00:55:29.847752 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-17 00:55:29.847757 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-17 00:55:29.847763 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-17 00:55:29.847774 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-17 00:55:29.847780 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-17 00:55:29.847786 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-17 00:55:29.847792 | orchestrator | 2026-03-17 00:55:29.847797 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-17 
00:55:29.847802 | orchestrator | Tuesday 17 March 2026 00:55:12 +0000 (0:00:07.663) 0:00:49.550 ********* 2026-03-17 00:55:29.847808 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-17 00:55:29.847814 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:55:29.847820 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-17 00:55:29.847827 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:55:29.847834 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-17 00:55:29.847839 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:55:29.847845 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-17 00:55:29.847850 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-17 00:55:29.847856 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-17 00:55:29.847861 | orchestrator | 2026-03-17 00:55:29.847867 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-17 00:55:29.847873 | orchestrator | Tuesday 17 March 2026 00:55:14 +0000 (0:00:02.884) 0:00:52.434 ********* 2026-03-17 00:55:29.847879 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-17 00:55:29.847885 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-17 00:55:29.847891 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:55:29.847897 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:55:29.847903 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-17 00:55:29.847913 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:55:29.847919 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-17 00:55:29.847931 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-17 00:55:29.847937 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-17 00:55:29.847943 | orchestrator | 
2026-03-17 00:55:29.847949 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-17 00:55:29.847955 | orchestrator | Tuesday 17 March 2026 00:55:18 +0000 (0:00:03.193) 0:00:55.627 ********* 2026-03-17 00:55:29.847961 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:55:29.847968 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:55:29.847974 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:55:29.847980 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:55:29.847986 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:55:29.847992 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:55:29.847998 | orchestrator | 2026-03-17 00:55:29.848004 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:55:29.848010 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 00:55:29.848017 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 00:55:29.848023 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 00:55:29.848029 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:55:29.848057 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:55:29.848070 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 00:55:29.848078 | orchestrator | 2026-03-17 00:55:29.848082 | orchestrator | 2026-03-17 00:55:29.848085 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:55:29.848089 | orchestrator | Tuesday 17 March 2026 00:55:26 +0000 (0:00:08.495) 0:01:04.123 ********* 2026-03-17 00:55:29.848093 | 
orchestrator | =============================================================================== 2026-03-17 00:55:29.848097 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.37s 2026-03-17 00:55:29.848101 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.66s 2026-03-17 00:55:29.848105 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 4.80s 2026-03-17 00:55:29.848108 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.76s 2026-03-17 00:55:29.848112 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.19s 2026-03-17 00:55:29.848116 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.00s 2026-03-17 00:55:29.848120 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.88s 2026-03-17 00:55:29.848124 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.81s 2026-03-17 00:55:29.848127 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.56s 2026-03-17 00:55:29.848131 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.43s 2026-03-17 00:55:29.848135 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.35s 2026-03-17 00:55:29.848139 | orchestrator | module-load : Load modules ---------------------------------------------- 2.13s 2026-03-17 00:55:29.848142 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.41s 2026-03-17 00:55:29.848146 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.15s 2026-03-17 00:55:29.848150 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.12s 2026-03-17 00:55:29.848154 | orchestrator | 
openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.10s 2026-03-17 00:55:29.848160 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.02s 2026-03-17 00:55:29.848166 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.99s 2026-03-17 00:55:29.848172 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.97s 2026-03-17 00:55:29.848178 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.78s 2026-03-17 00:55:32.888467 | orchestrator | 2026-03-17 00:55:32 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:32.888782 | orchestrator | 2026-03-17 00:55:32 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:55:32.889665 | orchestrator | 2026-03-17 00:55:32 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:32.890344 | orchestrator | 2026-03-17 00:55:32 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:32.892255 | orchestrator | 2026-03-17 00:55:32 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:32.892292 | orchestrator | 2026-03-17 00:55:32 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:55:35.918774 | orchestrator | 2026-03-17 00:55:35 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:55:35.920247 | orchestrator | 2026-03-17 00:55:35 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:55:35.923079 | orchestrator | 2026-03-17 00:55:35 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:55:35.925186 | orchestrator | 2026-03-17 00:55:35 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:55:35.927237 | orchestrator | 2026-03-17 00:55:35 | INFO  | Task 
7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state STARTED 2026-03-17 00:55:35.927275 | orchestrator | 2026-03-17 00:55:35 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:27.784847 | orchestrator | 2026-03-17 00:56:27 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:56:27.785439 | orchestrator | 2026-03-17 00:56:27 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:56:27.787778 | orchestrator | 2026-03-17 00:56:27 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:56:27.787859 | orchestrator | 2026-03-17 00:56:27 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:56:27.790305 | orchestrator | 2026-03-17 00:56:27 | INFO  | Task 7d6eb32c-ea6a-428c-84ca-39493d25e471 is in state SUCCESS 2026-03-17 00:56:27.791014 | orchestrator | 2026-03-17 00:56:27.791067 | orchestrator | 2026-03-17 00:56:27.791075 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-17 00:56:27.791083 | orchestrator | 2026-03-17 00:56:27.791090 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec
'main' - Prerequisites] ***
2026-03-17 00:56:27.791097 | orchestrator | Tuesday 17 March 2026 00:51:57 +0000 (0:00:00.273) 0:00:00.273 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_prereq : Set same timezone on every Server] **************************
Tuesday 17 March 2026 00:51:58 +0000 (0:00:00.877) 0:00:01.150 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [k3s_prereq : Set SELinux to disabled state] ******************************
Tuesday 17 March 2026 00:51:59 +0000 (0:00:01.053) 0:00:02.203 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
Tuesday 17 March 2026 00:52:00 +0000 (0:00:00.893) 0:00:03.097 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
Tuesday 17 March 2026 00:52:02 +0000 (0:00:01.711) 0:00:04.809 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
Tuesday 17 March 2026 00:52:03 +0000 (0:00:01.245) 0:00:06.054 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
Tuesday 17 March 2026 00:52:04 +0000 (0:00:00.997) 0:00:07.052 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Load br_netfilter] ******************************************
Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.801) 0:00:07.853 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
Tuesday 17 March 2026 00:52:06 +0000 (0:00:00.600) 0:00:08.454 *********
skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables) 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables) 
skipping: [testbed-node-2]

TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
Tuesday 17 March 2026 00:52:07 +0000 (0:00:00.953) 0:00:09.408 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
Tuesday 17 March 2026 00:52:08 +0000 (0:00:01.187) 0:00:10.595 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_download : Download k3s binary x64] **********************************
Tuesday 17 March 2026 00:52:08 +0000 (0:00:00.612) 0:00:11.207 *********
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_download : Download k3s binary arm64] ********************************
Tuesday 17 March 2026 00:52:14 +0000 (0:00:05.395) 0:00:16.603 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_download : Download k3s binary armhf] ********************************
Tuesday 17 March 2026 00:52:15 +0000 (0:00:01.218) 0:00:17.821 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
Tuesday 17 March 2026 00:52:17 +0000 (0:00:02.318) 0:00:20.140 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
Tuesday 17 March 2026 00:52:19 +0000 (0:00:01.255) 0:00:21.396 *********
skipping: [testbed-node-3] => (item=rancher) 
skipping: [testbed-node-3] => (item=rancher/k3s) 
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=rancher) 
skipping: [testbed-node-4] => (item=rancher/k3s) 
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=rancher) 
skipping: [testbed-node-5] => (item=rancher/k3s) 
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=rancher) 
skipping: [testbed-node-0] => (item=rancher/k3s) 
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=rancher) 
skipping: [testbed-node-1] => (item=rancher/k3s) 
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=rancher) 
skipping: [testbed-node-2] => (item=rancher/k3s) 
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
Tuesday 17 March 2026 00:52:20 +0000 (0:00:01.206) 0:00:22.602 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
Tuesday 17 March 2026 00:52:22 +0000 (0:00:02.599) 0:00:25.202 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Deploy k3s master nodes] *************************************************

TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
Tuesday 17 March 2026 00:52:24 +0000 (0:00:01.565) 0:00:26.768 *********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Stop k3s-init] **********************************************
Tuesday 17 March 2026 00:52:25 +0000 (0:00:00.904) 0:00:27.672 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Stop k3s] ***************************************************
Tuesday 17 March 2026 00:52:26 +0000 (0:00:01.213) 0:00:28.885 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Clean previous runs of k3s-init] ****************************
Tuesday 17 March 2026 00:52:27 +0000 (0:00:00.875) 0:00:29.761 *********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
Tuesday 17 March 2026 00:52:28 +0000 (0:00:00.843) 0:00:30.604 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
Tuesday 17 March 2026 00:52:28 +0000 (0:00:00.313) 0:00:30.918 *********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [k3s_server : Create custom resolv.conf for k3s] **************************
Tuesday 17 March 2026 00:52:29 +0000 (0:00:00.855) 0:00:31.773 *********
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Deploy vip manifest] ****************************************
Tuesday 17 March 2026 00:52:31 +0000 (0:00:01.542) 0:00:33.316 *********
included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
Tuesday 17 March 2026 00:52:31 +0000 (0:00:00.672) 0:00:33.989 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Create manifests directory on first master] *****************
Tuesday 17 March 2026 00:52:33 +0000 (0:00:02.028) 0:00:36.018 *********
skipping: [testbed-node-1]
changed: [testbed-node-0]
skipping: [testbed-node-2]

TASK [k3s_server : Download vip rbac manifest to first master] *****************
Tuesday 17 March 2026 00:52:34 +0000 (0:00:00.902) 0:00:36.920 *********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Copy vip manifest to first master] **************************
Tuesday 17 March 2026 00:52:35 +0000 (0:00:01.306) 0:00:38.227 *********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Deploy metallb manifest] ************************************
Tuesday 17 March 2026 00:52:37 +0000 (0:00:01.430) 0:00:39.657 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Tuesday 17 March 2026 00:52:37 +0000 (0:00:00.447) 0:00:40.104 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Tuesday 17 March 2026 00:52:38 +0000 (0:00:00.503) 0:00:40.607 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
Tuesday 17 March 2026 00:52:40 +0000 (0:00:02.344) 0:00:42.952 *********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
Tuesday 17 March 2026 00:52:42 +0000 (0:00:02.151) 0:00:45.103 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Tuesday 17 March 2026 00:52:43 +0000 (0:00:00.385) 0:00:45.489 *********
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Tuesday 17 March 2026 00:53:26 +0000 (0:00:42.995) 0:01:28.485 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Tuesday 17 March 2026 00:53:26 +0000 (0:00:00.648) 0:01:29.134 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy K3s service file] **************************************
Tuesday 17 March 2026 00:53:28 +0000 (0:00:01.254) 0:01:30.388 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Enable and check K3s service] *******************************
Tuesday 17 March 2026 00:53:29 +0000 (0:00:01.300) 0:01:31.689 *********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [k3s_server : Wait for node-token] ****************************************
Tuesday 17 March 2026 00:53:53 +0000 (0:00:24.357) 0:01:56.046 *********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [k3s_server : Register node-token file access mode] ***********************
Tuesday 17 March 2026 00:53:54 +0000 (0:00:00.732) 0:01:56.779 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Tuesday 17 March 2026 00:53:55 +0000 (0:00:01.039) 0:01:57.818 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Tuesday 17 March 2026 00:53:56 +0000 (0:00:00.591) 0:01:58.409 *********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [k3s_server : Store Master node-token] ************************************
Tuesday 17 March 2026 00:53:56 +0000 (0:00:00.654) 0:01:59.064 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Tuesday 17 March 2026 00:53:57 +0000 (0:00:00.301) 0:01:59.366 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Tuesday 17 March 2026 00:53:57 +0000 (0:00:00.809) 0:02:00.175 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Tuesday 17 March 2026 00:53:58 +0000 (0:00:00.551) 0:02:00.727 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Tuesday 17 March 2026 00:53:59 +0000 (0:00:00.745) 0:02:01.472 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Tuesday 17 March 2026 00:53:59 +0000 (0:00:00.771) 0:02:02.244 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Tuesday 17 March 2026 00:54:00 +0000 (0:00:00.561) 0:02:02.806 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Tuesday 17 March 2026 00:54:00 +0000 (0:00:00.297) 0:02:03.103 *********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Tuesday 17 March 2026 00:54:01 +0000 (0:00:00.559) 0:02:03.663 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Tuesday 17 March 2026 00:54:02 +0000 (0:00:00.636) 0:02:04.299 *********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Tuesday 17 March 2026 00:54:04 +0000 (0:00:02.982) 0:02:07.282 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Tuesday 17 March 2026 00:54:05 +0000 (0:00:00.286) 0:02:07.568 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Tuesday 17 March 2026 00:54:05 +0000 (0:00:00.558) 0:02:08.126 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Tuesday 17 March 2026 00:54:06 +0000 (0:00:00.382) 0:02:08.508 *********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Tuesday 17 March 2026 00:54:06 +0000 (0:00:00.472) 0:02:08.981 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Tuesday 17 March 2026 00:54:06 +0000 (0:00:00.279) 0:02:09.260 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Tuesday 17 March 2026 00:54:07 +0000 (0:00:00.390) 0:02:09.650 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Tuesday 17 March 2026 00:54:07 +0000 (0:00:00.272) 0:02:09.923 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-17
00:56:27.795399 | orchestrator | Tuesday 17 March 2026 00:54:08 +0000 (0:00:00.714) 0:02:10.638 ********* 2026-03-17 00:56:27.795405 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:56:27.795412 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:56:27.795418 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:56:27.795425 | orchestrator | 2026-03-17 00:56:27.795431 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-17 00:56:27.795438 | orchestrator | Tuesday 17 March 2026 00:54:09 +0000 (0:00:01.253) 0:02:11.892 ********* 2026-03-17 00:56:27.795445 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:56:27.795451 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:56:27.795458 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:56:27.795465 | orchestrator | 2026-03-17 00:56:27.795471 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-17 00:56:27.795478 | orchestrator | Tuesday 17 March 2026 00:54:11 +0000 (0:00:01.692) 0:02:13.584 ********* 2026-03-17 00:56:27.795485 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:56:27.795491 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:56:27.795497 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:56:27.795503 | orchestrator | 2026-03-17 00:56:27.795510 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-17 00:56:27.795517 | orchestrator | 2026-03-17 00:56:27.795523 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-17 00:56:27.795529 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:10.760) 0:02:24.344 ********* 2026-03-17 00:56:27.795536 | orchestrator | ok: [testbed-manager] 2026-03-17 00:56:27.795543 | orchestrator | 2026-03-17 00:56:27.795549 | orchestrator | TASK [Create .kube directory] ************************************************** 
2026-03-17 00:56:27.795556 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:00.602) 0:02:24.947 *********
2026-03-17 00:56:27.795563 | orchestrator | changed: [testbed-manager]
2026-03-17 00:56:27.795569 | orchestrator |
2026-03-17 00:56:27.795576 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-17 00:56:27.795583 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.356) 0:02:25.303 *********
2026-03-17 00:56:27.795590 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-17 00:56:27.795596 | orchestrator |
2026-03-17 00:56:27.795603 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-17 00:56:27.795610 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.534) 0:02:25.837 *********
2026-03-17 00:56:27.795616 | orchestrator | changed: [testbed-manager]
2026-03-17 00:56:27.795622 | orchestrator |
2026-03-17 00:56:27.795629 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-17 00:56:27.795635 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:00.914) 0:02:26.752 *********
2026-03-17 00:56:27.795642 | orchestrator | changed: [testbed-manager]
2026-03-17 00:56:27.795649 | orchestrator |
2026-03-17 00:56:27.795655 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-17 00:56:27.795662 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:00.497) 0:02:27.249 *********
2026-03-17 00:56:27.795668 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-17 00:56:27.795674 | orchestrator |
2026-03-17 00:56:27.795680 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-17 00:56:27.795685 | orchestrator | Tuesday 17 March 2026 00:54:26 +0000 (0:00:01.558) 0:02:28.807 *********
2026-03-17 00:56:27.795700 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-17 00:56:27.795706 | orchestrator |
2026-03-17 00:56:27.795712 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-17 00:56:27.795717 | orchestrator | Tuesday 17 March 2026 00:54:27 +0000 (0:00:00.803) 0:02:29.611 *********
2026-03-17 00:56:27.795723 | orchestrator | changed: [testbed-manager]
2026-03-17 00:56:27.795728 | orchestrator |
2026-03-17 00:56:27.795734 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-17 00:56:27.795739 | orchestrator | Tuesday 17 March 2026 00:54:27 +0000 (0:00:00.465) 0:02:30.076 *********
2026-03-17 00:56:27.795745 | orchestrator | changed: [testbed-manager]
2026-03-17 00:56:27.795750 | orchestrator |
2026-03-17 00:56:27.795756 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-17 00:56:27.795761 | orchestrator |
2026-03-17 00:56:27.795767 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-17 00:56:27.795776 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:00.385) 0:02:30.461 *********
2026-03-17 00:56:27.795783 | orchestrator | ok: [testbed-manager]
2026-03-17 00:56:27.795789 | orchestrator |
2026-03-17 00:56:27.795794 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-17 00:56:27.795800 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:00.153) 0:02:30.615 *********
2026-03-17 00:56:27.795805 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-17 00:56:27.795811 | orchestrator |
2026-03-17 00:56:27.795817 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-17 00:56:27.795823 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:00.218) 0:02:30.833 *********
2026-03-17 00:56:27.795830 | orchestrator | ok: [testbed-manager]
2026-03-17 00:56:27.795836 | orchestrator |
2026-03-17 00:56:27.795843 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-17 00:56:27.795849 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:01.014) 0:02:31.848 *********
2026-03-17 00:56:27.795856 | orchestrator | ok: [testbed-manager]
2026-03-17 00:56:27.795862 | orchestrator |
2026-03-17 00:56:27.795869 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-17 00:56:27.795875 | orchestrator | Tuesday 17 March 2026 00:54:30 +0000 (0:00:01.412) 0:02:33.260 *********
2026-03-17 00:56:27.795882 | orchestrator | changed: [testbed-manager]
2026-03-17 00:56:27.795888 | orchestrator |
2026-03-17 00:56:27.795894 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-17 00:56:27.795901 | orchestrator | Tuesday 17 March 2026 00:54:31 +0000 (0:00:00.736) 0:02:33.997 *********
2026-03-17 00:56:27.795908 | orchestrator | ok: [testbed-manager]
2026-03-17 00:56:27.795914 | orchestrator |
2026-03-17 00:56:27.795926 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-17 00:56:27.795933 | orchestrator | Tuesday 17 March 2026 00:54:32 +0000 (0:00:00.430) 0:02:34.427 *********
2026-03-17 00:56:27.795940 | orchestrator | changed: [testbed-manager]
2026-03-17 00:56:27.795946 | orchestrator |
2026-03-17 00:56:27.795953 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-17 00:56:27.795959 | orchestrator | Tuesday 17 March 2026 00:54:38 +0000 (0:00:06.357) 0:02:40.785 *********
2026-03-17 00:56:27.795966 | orchestrator | changed: [testbed-manager]
2026-03-17 00:56:27.795972 | orchestrator |
2026-03-17 00:56:27.795979 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-17 00:56:27.795986 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:13.621) 0:02:54.406 *********
2026-03-17 00:56:27.795992 | orchestrator | ok: [testbed-manager]
2026-03-17 00:56:27.795999 | orchestrator |
2026-03-17 00:56:27.796006 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-17 00:56:27.796013 | orchestrator |
2026-03-17 00:56:27.796041 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-17 00:56:27.796054 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:00.476) 0:02:54.882 *********
2026-03-17 00:56:27.796061 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:56:27.796067 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:56:27.796074 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:56:27.796080 | orchestrator |
2026-03-17 00:56:27.796088 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-17 00:56:27.796094 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:00.382) 0:02:55.265 *********
2026-03-17 00:56:27.796101 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:27.796107 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:27.796114 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:27.796120 | orchestrator |
2026-03-17 00:56:27.796126 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-17 00:56:27.796133 | orchestrator | Tuesday 17 March 2026 00:54:53 +0000 (0:00:00.338) 0:02:55.604 *********
2026-03-17 00:56:27.796139 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:56:27.796146 | orchestrator |
2026-03-17 00:56:27.796153 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-17 00:56:27.796160 | orchestrator | Tuesday 17 March 2026 00:54:53 +0000 (0:00:00.499) 0:02:56.104 *********
2026-03-17 00:56:27.796167 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-17 00:56:27.796174 | orchestrator |
2026-03-17 00:56:27.796180 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-17 00:56:27.796186 | orchestrator | Tuesday 17 March 2026 00:54:54 +0000 (0:00:00.856) 0:02:56.960 *********
2026-03-17 00:56:27.796193 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 00:56:27.796200 | orchestrator |
2026-03-17 00:56:27.796206 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-17 00:56:27.796213 | orchestrator | Tuesday 17 March 2026 00:54:55 +0000 (0:00:00.668) 0:02:57.628 *********
2026-03-17 00:56:27.796219 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:27.796226 | orchestrator |
2026-03-17 00:56:27.796232 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-17 00:56:27.796239 | orchestrator | Tuesday 17 March 2026 00:54:55 +0000 (0:00:00.217) 0:02:57.846 *********
2026-03-17 00:56:27.796246 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 00:56:27.796252 | orchestrator |
2026-03-17 00:56:27.796259 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-17 00:56:27.796265 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:01.075) 0:02:58.921 *********
2026-03-17 00:56:27.796272 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:27.796279 | orchestrator |
2026-03-17 00:56:27.796285 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-17 00:56:27.796292 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:00.107) 0:02:59.028 *********
2026-03-17 00:56:27.796299 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:27.796305 | orchestrator |
2026-03-17 00:56:27.796312 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-17 00:56:27.796318 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:00.135) 0:02:59.164 *********
2026-03-17 00:56:27.796325 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:27.796331 | orchestrator |
2026-03-17 00:56:27.796342 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-17 00:56:27.796349 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:00.107) 0:02:59.272 *********
2026-03-17 00:56:27.796355 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:27.796362 | orchestrator |
2026-03-17 00:56:27.796369 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-17 00:56:27.796375 | orchestrator | Tuesday 17 March 2026 00:54:57 +0000 (0:00:00.137) 0:02:59.410 *********
2026-03-17 00:56:27.796382 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-17 00:56:27.796390 | orchestrator |
2026-03-17 00:56:27.796397 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-17 00:56:27.796407 | orchestrator | Tuesday 17 March 2026 00:55:02 +0000 (0:00:05.455) 0:03:04.865 *********
2026-03-17 00:56:27.796414 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-17 00:56:27.796420 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-03-17 00:56:27.796427 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-17 00:56:27.796434 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-17 00:56:27.796441 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-17 00:56:27.796447 | orchestrator |
2026-03-17 00:56:27.796454 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-17 00:56:27.796461 | orchestrator | Tuesday 17 March 2026 00:55:59 +0000 (0:00:56.700) 0:04:01.565 *********
2026-03-17 00:56:27.796474 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 00:56:27.796480 | orchestrator |
2026-03-17 00:56:27.796487 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-17 00:56:27.796494 | orchestrator | Tuesday 17 March 2026 00:56:00 +0000 (0:00:01.395) 0:04:02.961 *********
2026-03-17 00:56:27.796501 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-17 00:56:27.796508 | orchestrator |
2026-03-17 00:56:27.796514 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-17 00:56:27.796521 | orchestrator | Tuesday 17 March 2026 00:56:02 +0000 (0:00:02.180) 0:04:05.141 *********
2026-03-17 00:56:27.796528 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-17 00:56:27.796534 | orchestrator |
2026-03-17 00:56:27.796541 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-17 00:56:27.796548 | orchestrator | Tuesday 17 March 2026 00:56:03 +0000 (0:00:00.951) 0:04:06.093 *********
2026-03-17 00:56:27.796554 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:27.796560 | orchestrator |
2026-03-17 00:56:27.796567 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-17 00:56:27.796573 | orchestrator | Tuesday 17 March 2026 00:56:03 +0000 (0:00:00.106) 0:04:06.199 *********
2026-03-17 00:56:27.796580 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-17 00:56:27.796586 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-17 00:56:27.796593 | orchestrator |
2026-03-17 00:56:27.796599 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-17 00:56:27.796605 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:01.939) 0:04:08.138 *********
2026-03-17 00:56:27.796612 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:56:27.796619 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:56:27.796626 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:56:27.796632 | orchestrator |
2026-03-17 00:56:27.796639 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-17 00:56:27.796645 | orchestrator | Tuesday 17 March 2026 00:56:06 +0000 (0:00:00.340) 0:04:08.479 *********
2026-03-17 00:56:27.796652 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:56:27.796659 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:56:27.796666 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:56:27.796673 | orchestrator |
2026-03-17 00:56:27.796679 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-17 00:56:27.796686 | orchestrator |
2026-03-17 00:56:27.796693 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-17 00:56:27.796699 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:00.901) 0:04:09.381 *********
2026-03-17 00:56:27.796706 | orchestrator | ok: [testbed-manager]
2026-03-17 00:56:27.796713 | orchestrator |
2026-03-17 00:56:27.796719 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-17 00:56:27.796726 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:00.150) 0:04:09.531 *********
2026-03-17 00:56:27.796738 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-17 00:56:27.796744 | orchestrator |
2026-03-17 00:56:27.796750 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-17 00:56:27.796755 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:00.423) 0:04:09.954 *********
2026-03-17 00:56:27.796761 | orchestrator | changed: [testbed-manager]
2026-03-17 00:56:27.796767 | orchestrator |
2026-03-17 00:56:27.796772 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-17 00:56:27.796778 | orchestrator |
2026-03-17 00:56:27.796785 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-17 00:56:27.796791 | orchestrator | Tuesday 17 March 2026 00:56:13 +0000 (0:00:05.873) 0:04:15.828 *********
2026-03-17 00:56:27.796797 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:56:27.796802 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:56:27.796808 | orchestrator | ok: [testbed-node-5]
2026-03-17 00:56:27.796813 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:56:27.796820 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:56:27.796825 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:56:27.796831 | orchestrator |
2026-03-17 00:56:27.796837 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-17 00:56:27.796843 | orchestrator | Tuesday 17 March 2026 00:56:14 +0000 (0:00:00.595) 0:04:16.424 *********
2026-03-17 00:56:27.796853 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-17 00:56:27.796859 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-17 00:56:27.796865 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-17 00:56:27.796870 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-17 00:56:27.796875 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-17 00:56:27.796881 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-17 00:56:27.796887 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-17 00:56:27.796893 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-17 00:56:27.796899 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-17 00:56:27.796904 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-17 00:56:27.796910 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-17 00:56:27.796917 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-17 00:56:27.796929 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-17 00:56:27.796936 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-17 00:56:27.796943 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-17 00:56:27.796949 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-17 00:56:27.796956 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-17 00:56:27.796962 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-17 00:56:27.796969 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-17 00:56:27.796976 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-17 00:56:27.796982 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-17 00:56:27.796988 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-17 00:56:27.797000 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-17 00:56:27.797006 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-17 00:56:27.797012 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-17 00:56:27.797059 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-17 00:56:27.797068 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-17 00:56:27.797074 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-17 00:56:27.797081 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-17 00:56:27.797087 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-17 00:56:27.797094 | orchestrator |
2026-03-17 00:56:27.797100 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-17 00:56:27.797106 | orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:10.888) 0:04:27.312 *********
2026-03-17 00:56:27.797113 | orchestrator | skipping: [testbed-node-3]
2026-03-17 00:56:27.797119 | orchestrator | skipping: [testbed-node-4]
2026-03-17 00:56:27.797126 | orchestrator |
skipping: [testbed-node-5] 2026-03-17 00:56:27.797132 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:27.797139 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:27.797145 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:27.797152 | orchestrator | 2026-03-17 00:56:27.797158 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-17 00:56:27.797165 | orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.390) 0:04:27.703 ********* 2026-03-17 00:56:27.797171 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:56:27.797177 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:56:27.797184 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:56:27.797190 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:56:27.797197 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:56:27.797203 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:56:27.797210 | orchestrator | 2026-03-17 00:56:27.797217 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:56:27.797224 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:56:27.797233 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-17 00:56:27.797241 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-17 00:56:27.797252 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-17 00:56:27.797259 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 00:56:27.797265 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 00:56:27.797273 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-17 00:56:27.797279 | orchestrator | 2026-03-17 00:56:27.797285 | orchestrator | 2026-03-17 00:56:27.797292 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:56:27.797299 | orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.469) 0:04:28.173 ********* 2026-03-17 00:56:27.797305 | orchestrator | =============================================================================== 2026-03-17 00:56:27.797318 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 56.70s 2026-03-17 00:56:27.797324 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.00s 2026-03-17 00:56:27.797331 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.36s 2026-03-17 00:56:27.797344 | orchestrator | kubectl : Install required packages ------------------------------------ 13.62s 2026-03-17 00:56:27.797351 | orchestrator | Manage labels ---------------------------------------------------------- 10.89s 2026-03-17 00:56:27.797357 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.76s 2026-03-17 00:56:27.797364 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.36s 2026-03-17 00:56:27.797371 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.87s 2026-03-17 00:56:27.797377 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.46s 2026-03-17 00:56:27.797384 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.40s 2026-03-17 00:56:27.797391 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.98s 2026-03-17 00:56:27.797397 | orchestrator 
| k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.60s 2026-03-17 00:56:27.797403 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.34s 2026-03-17 00:56:27.797410 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.32s 2026-03-17 00:56:27.797416 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.18s 2026-03-17 00:56:27.797423 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.15s 2026-03-17 00:56:27.797429 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.03s 2026-03-17 00:56:27.797436 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.94s 2026-03-17 00:56:27.797443 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.71s 2026-03-17 00:56:27.797450 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.69s 2026-03-17 00:56:27.797456 | orchestrator | 2026-03-17 00:56:27 | INFO  | Task 54d9580a-0ee2-4d37-a278-850cf6886760 is in state STARTED 2026-03-17 00:56:27.797463 | orchestrator | 2026-03-17 00:56:27 | INFO  | Task 2dc5e73f-3f37-4da5-8af8-d1177c6c08b7 is in state STARTED 2026-03-17 00:56:27.797469 | orchestrator | 2026-03-17 00:56:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:30.822605 | orchestrator | 2026-03-17 00:56:30 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:56:30.822728 | orchestrator | 2026-03-17 00:56:30 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:56:30.823497 | orchestrator | 2026-03-17 00:56:30 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:56:30.824505 | orchestrator | 2026-03-17 00:56:30 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in 
state STARTED 2026-03-17 00:56:30.825078 | orchestrator | 2026-03-17 00:56:30 | INFO  | Task 54d9580a-0ee2-4d37-a278-850cf6886760 is in state STARTED 2026-03-17 00:56:30.826483 | orchestrator | 2026-03-17 00:56:30 | INFO  | Task 2dc5e73f-3f37-4da5-8af8-d1177c6c08b7 is in state STARTED 2026-03-17 00:56:30.826528 | orchestrator | 2026-03-17 00:56:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:33.853660 | orchestrator | 2026-03-17 00:56:33 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:56:33.853738 | orchestrator | 2026-03-17 00:56:33 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:56:33.854334 | orchestrator | 2026-03-17 00:56:33 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:56:33.856072 | orchestrator | 2026-03-17 00:56:33 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:56:33.856282 | orchestrator | 2026-03-17 00:56:33 | INFO  | Task 54d9580a-0ee2-4d37-a278-850cf6886760 is in state SUCCESS 2026-03-17 00:56:33.856856 | orchestrator | 2026-03-17 00:56:33 | INFO  | Task 2dc5e73f-3f37-4da5-8af8-d1177c6c08b7 is in state STARTED 2026-03-17 00:56:33.856929 | orchestrator | 2026-03-17 00:56:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:56:36.882801 | orchestrator | 2026-03-17 00:56:36 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:56:36.883347 | orchestrator | 2026-03-17 00:56:36 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:56:36.884981 | orchestrator | 2026-03-17 00:56:36 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:56:36.886436 | orchestrator | 2026-03-17 00:56:36 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:56:36.887288 | orchestrator | 2026-03-17 00:56:36 | INFO  | Task 2dc5e73f-3f37-4da5-8af8-d1177c6c08b7 is in state 
SUCCESS 2026-03-17 00:56:36.887310 | orchestrator | 2026-03-17 00:56:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:57:43.705522 | orchestrator | 2026-03-17 00:57:43 | INFO  |
Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:57:43.707621 | orchestrator | 2026-03-17 00:57:43 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:57:43.707682 | orchestrator | 2026-03-17 00:57:43 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:57:43.707688 | orchestrator | 2026-03-17 00:57:43 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:57:43.707693 | orchestrator | 2026-03-17 00:57:43 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:57:46.739970 | orchestrator | 2026-03-17 00:57:46 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:57:46.740323 | orchestrator | 2026-03-17 00:57:46 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:57:46.741002 | orchestrator | 2026-03-17 00:57:46 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state STARTED 2026-03-17 00:57:46.741579 | orchestrator | 2026-03-17 00:57:46 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:57:46.742195 | orchestrator | 2026-03-17 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:57:49.769231 | orchestrator | 2026-03-17 00:57:49 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:57:49.769341 | orchestrator | 2026-03-17 00:57:49 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED 2026-03-17 00:57:49.771665 | orchestrator | 2026-03-17 00:57:49 | INFO  | Task b42465fa-0557-46b6-b877-313804e85db5 is in state SUCCESS 2026-03-17 00:57:49.773333 | orchestrator | 2026-03-17 00:57:49.773385 | orchestrator | 2026-03-17 00:57:49.773396 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-17 00:57:49.773407 | orchestrator | 2026-03-17 00:57:49.773416 | orchestrator | TASK [Get kubeconfig file] 
***************************************************** 2026-03-17 00:57:49.773425 | orchestrator | Tuesday 17 March 2026 00:56:29 +0000 (0:00:00.215) 0:00:00.215 ********* 2026-03-17 00:57:49.773435 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-17 00:57:49.773443 | orchestrator | 2026-03-17 00:57:49.773452 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-17 00:57:49.773460 | orchestrator | Tuesday 17 March 2026 00:56:30 +0000 (0:00:01.159) 0:00:01.374 ********* 2026-03-17 00:57:49.773469 | orchestrator | changed: [testbed-manager] 2026-03-17 00:57:49.773478 | orchestrator | 2026-03-17 00:57:49.773487 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-17 00:57:49.773496 | orchestrator | Tuesday 17 March 2026 00:56:31 +0000 (0:00:01.678) 0:00:03.053 ********* 2026-03-17 00:57:49.773505 | orchestrator | changed: [testbed-manager] 2026-03-17 00:57:49.773514 | orchestrator | 2026-03-17 00:57:49.773522 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:57:49.773531 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:57:49.773540 | orchestrator | 2026-03-17 00:57:49.773549 | orchestrator | 2026-03-17 00:57:49.773557 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:57:49.773566 | orchestrator | Tuesday 17 March 2026 00:56:32 +0000 (0:00:00.568) 0:00:03.622 ********* 2026-03-17 00:57:49.773575 | orchestrator | =============================================================================== 2026-03-17 00:57:49.773602 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.68s 2026-03-17 00:57:49.773611 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.16s 2026-03-17 
00:57:49.773619 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.57s 2026-03-17 00:57:49.773628 | orchestrator | 2026-03-17 00:57:49.773636 | orchestrator | 2026-03-17 00:57:49.773646 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-17 00:57:49.773785 | orchestrator | 2026-03-17 00:57:49.773797 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-17 00:57:49.773806 | orchestrator | Tuesday 17 March 2026 00:56:28 +0000 (0:00:00.172) 0:00:00.172 ********* 2026-03-17 00:57:49.773815 | orchestrator | ok: [testbed-manager] 2026-03-17 00:57:49.773825 | orchestrator | 2026-03-17 00:57:49.773834 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-17 00:57:49.773844 | orchestrator | Tuesday 17 March 2026 00:56:29 +0000 (0:00:00.816) 0:00:00.989 ********* 2026-03-17 00:57:49.773853 | orchestrator | ok: [testbed-manager] 2026-03-17 00:57:49.773862 | orchestrator | 2026-03-17 00:57:49.773871 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-17 00:57:49.773880 | orchestrator | Tuesday 17 March 2026 00:56:30 +0000 (0:00:00.513) 0:00:01.503 ********* 2026-03-17 00:57:49.773889 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-17 00:57:49.773898 | orchestrator | 2026-03-17 00:57:49.773907 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-17 00:57:49.773916 | orchestrator | Tuesday 17 March 2026 00:56:31 +0000 (0:00:01.056) 0:00:02.559 ********* 2026-03-17 00:57:49.773925 | orchestrator | changed: [testbed-manager] 2026-03-17 00:57:49.773934 | orchestrator | 2026-03-17 00:57:49.773943 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-17 00:57:49.773951 | orchestrator | Tuesday 17 
March 2026 00:56:32 +0000 (0:00:01.198) 0:00:03.757 ********* 2026-03-17 00:57:49.773960 | orchestrator | changed: [testbed-manager] 2026-03-17 00:57:49.773985 | orchestrator | 2026-03-17 00:57:49.773996 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-17 00:57:49.774005 | orchestrator | Tuesday 17 March 2026 00:56:32 +0000 (0:00:00.478) 0:00:04.236 ********* 2026-03-17 00:57:49.774044 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-17 00:57:49.774051 | orchestrator | 2026-03-17 00:57:49.774056 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-17 00:57:49.774061 | orchestrator | Tuesday 17 March 2026 00:56:34 +0000 (0:00:01.571) 0:00:05.808 ********* 2026-03-17 00:57:49.774067 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-17 00:57:49.774072 | orchestrator | 2026-03-17 00:57:49.774077 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-17 00:57:49.774082 | orchestrator | Tuesday 17 March 2026 00:56:35 +0000 (0:00:00.800) 0:00:06.608 ********* 2026-03-17 00:57:49.774087 | orchestrator | ok: [testbed-manager] 2026-03-17 00:57:49.774092 | orchestrator | 2026-03-17 00:57:49.774097 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-17 00:57:49.774103 | orchestrator | Tuesday 17 March 2026 00:56:35 +0000 (0:00:00.363) 0:00:06.972 ********* 2026-03-17 00:57:49.774108 | orchestrator | ok: [testbed-manager] 2026-03-17 00:57:49.774113 | orchestrator | 2026-03-17 00:57:49.774118 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:57:49.774123 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 00:57:49.774129 | orchestrator | 2026-03-17 00:57:49.774134 | orchestrator | 2026-03-17 00:57:49.774139 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:57:49.774144 | orchestrator | Tuesday 17 March 2026 00:56:36 +0000 (0:00:00.732) 0:00:07.704 ********* 2026-03-17 00:57:49.774149 | orchestrator | =============================================================================== 2026-03-17 00:57:49.774161 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.57s 2026-03-17 00:57:49.774166 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.20s 2026-03-17 00:57:49.774172 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.06s 2026-03-17 00:57:49.774185 | orchestrator | Get home directory of operator user ------------------------------------- 0.82s 2026-03-17 00:57:49.774191 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.80s 2026-03-17 00:57:49.774239 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.73s 2026-03-17 00:57:49.774249 | orchestrator | Create .kube directory -------------------------------------------------- 0.51s 2026-03-17 00:57:49.774254 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.48s 2026-03-17 00:57:49.774259 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.36s 2026-03-17 00:57:49.774264 | orchestrator | 2026-03-17 00:57:49.774269 | orchestrator | 2026-03-17 00:57:49.774277 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-17 00:57:49.774286 | orchestrator | 2026-03-17 00:57:49.774294 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-17 00:57:49.774302 | orchestrator | Tuesday 17 March 2026 00:54:42 +0000 (0:00:00.118) 0:00:00.118 ********* 2026-03-17 00:57:49.774310 | orchestrator | ok: 
[localhost] => { 2026-03-17 00:57:49.774319 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-17 00:57:49.774328 | orchestrator | } 2026-03-17 00:57:49.774337 | orchestrator | 2026-03-17 00:57:49.774346 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-17 00:57:49.774355 | orchestrator | Tuesday 17 March 2026 00:54:42 +0000 (0:00:00.041) 0:00:00.160 ********* 2026-03-17 00:57:49.774365 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-17 00:57:49.774375 | orchestrator | ...ignoring 2026-03-17 00:57:49.774385 | orchestrator | 2026-03-17 00:57:49.774394 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-17 00:57:49.774402 | orchestrator | Tuesday 17 March 2026 00:54:45 +0000 (0:00:03.161) 0:00:03.321 ********* 2026-03-17 00:57:49.774411 | orchestrator | skipping: [localhost] 2026-03-17 00:57:49.774419 | orchestrator | 2026-03-17 00:57:49.774428 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-17 00:57:49.774438 | orchestrator | Tuesday 17 March 2026 00:54:45 +0000 (0:00:00.129) 0:00:03.450 ********* 2026-03-17 00:57:49.774448 | orchestrator | ok: [localhost] 2026-03-17 00:57:49.774457 | orchestrator | 2026-03-17 00:57:49.774466 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 00:57:49.774478 | orchestrator | 2026-03-17 00:57:49.774484 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 00:57:49.774490 | orchestrator | Tuesday 17 March 2026 00:54:46 +0000 (0:00:00.288) 0:00:03.739 ********* 2026-03-17 00:57:49.774496 | orchestrator | ok: [testbed-node-0] 2026-03-17 
00:57:49.774502 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:57:49.774508 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:57:49.774514 | orchestrator | 2026-03-17 00:57:49.774520 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 00:57:49.774526 | orchestrator | Tuesday 17 March 2026 00:54:46 +0000 (0:00:00.377) 0:00:04.116 ********* 2026-03-17 00:57:49.774532 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-17 00:57:49.774538 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-17 00:57:49.774543 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-17 00:57:49.774549 | orchestrator | 2026-03-17 00:57:49.774555 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-17 00:57:49.774562 | orchestrator | 2026-03-17 00:57:49.774568 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-17 00:57:49.774578 | orchestrator | Tuesday 17 March 2026 00:54:47 +0000 (0:00:00.956) 0:00:05.073 ********* 2026-03-17 00:57:49.774585 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:49.774590 | orchestrator | 2026-03-17 00:57:49.774597 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-17 00:57:49.774603 | orchestrator | Tuesday 17 March 2026 00:54:49 +0000 (0:00:01.452) 0:00:06.526 ********* 2026-03-17 00:57:49.774609 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:49.774615 | orchestrator | 2026-03-17 00:57:49.774621 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-17 00:57:49.774626 | orchestrator | Tuesday 17 March 2026 00:54:51 +0000 (0:00:02.198) 0:00:08.724 ********* 2026-03-17 00:57:49.774631 | orchestrator | skipping: 
[testbed-node-0] 2026-03-17 00:57:49.774636 | orchestrator | 2026-03-17 00:57:49.774642 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-17 00:57:49.774647 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:00.857) 0:00:09.582 ********* 2026-03-17 00:57:49.774652 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:49.774657 | orchestrator | 2026-03-17 00:57:49.774662 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-17 00:57:49.774667 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:00.305) 0:00:09.888 ********* 2026-03-17 00:57:49.774672 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:49.774677 | orchestrator | 2026-03-17 00:57:49.774682 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-17 00:57:49.774687 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:00.233) 0:00:10.121 ********* 2026-03-17 00:57:49.774692 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:49.774698 | orchestrator | 2026-03-17 00:57:49.774703 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-17 00:57:49.774708 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:00.260) 0:00:10.382 ********* 2026-03-17 00:57:49.774713 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:49.774718 | orchestrator | 2026-03-17 00:57:49.774723 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-17 00:57:49.774736 | orchestrator | Tuesday 17 March 2026 00:54:53 +0000 (0:00:00.651) 0:00:11.033 ********* 2026-03-17 00:57:49.774741 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:49.774747 | orchestrator | 2026-03-17 00:57:49.774752 | orchestrator | TASK [rabbitmq : List 
RabbitMQ policies] *************************************** 2026-03-17 00:57:49.774757 | orchestrator | Tuesday 17 March 2026 00:54:54 +0000 (0:00:00.892) 0:00:11.926 ********* 2026-03-17 00:57:49.774762 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:49.774767 | orchestrator | 2026-03-17 00:57:49.774773 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-17 00:57:49.774778 | orchestrator | Tuesday 17 March 2026 00:54:54 +0000 (0:00:00.525) 0:00:12.451 ********* 2026-03-17 00:57:49.774783 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:49.774788 | orchestrator | 2026-03-17 00:57:49.774793 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-17 00:57:49.774798 | orchestrator | Tuesday 17 March 2026 00:54:55 +0000 (0:00:00.243) 0:00:12.695 ********* 2026-03-17 00:57:49.774806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 
00:57:49.774821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.774827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.774833 | orchestrator | 2026-03-17 00:57:49.774838 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-17 00:57:49.774844 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:01.010) 0:00:13.705 ********* 2026-03-17 00:57:49.774854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.774862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.774872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.774878 | orchestrator | 2026-03-17 00:57:49.774883 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-17 00:57:49.774888 | orchestrator | Tuesday 17 March 2026 00:54:58 +0000 (0:00:02.122) 
0:00:15.827 ********* 2026-03-17 00:57:49.774894 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-17 00:57:49.774899 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-17 00:57:49.774904 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-17 00:57:49.774910 | orchestrator | 2026-03-17 00:57:49.774915 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-17 00:57:49.774920 | orchestrator | Tuesday 17 March 2026 00:54:59 +0000 (0:00:01.594) 0:00:17.422 ********* 2026-03-17 00:57:49.774925 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-17 00:57:49.774930 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-17 00:57:49.774936 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-17 00:57:49.774941 | orchestrator | 2026-03-17 00:57:49.774946 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-17 00:57:49.774954 | orchestrator | Tuesday 17 March 2026 00:55:03 +0000 (0:00:03.276) 0:00:20.698 ********* 2026-03-17 00:57:49.774960 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-17 00:57:49.774965 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-17 00:57:49.774988 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-17 00:57:49.774995 | orchestrator | 2026-03-17 00:57:49.775000 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-17 00:57:49.775005 | orchestrator | Tuesday 17 
March 2026 00:55:04 +0000 (0:00:01.178) 0:00:21.877 ********* 2026-03-17 00:57:49.775014 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-17 00:57:49.775019 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-17 00:57:49.775024 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-17 00:57:49.775030 | orchestrator | 2026-03-17 00:57:49.775035 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-17 00:57:49.775040 | orchestrator | Tuesday 17 March 2026 00:55:06 +0000 (0:00:01.943) 0:00:23.821 ********* 2026-03-17 00:57:49.775045 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-17 00:57:49.775050 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-17 00:57:49.775056 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-17 00:57:49.775061 | orchestrator | 2026-03-17 00:57:49.775066 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-17 00:57:49.775071 | orchestrator | Tuesday 17 March 2026 00:55:07 +0000 (0:00:01.338) 0:00:25.159 ********* 2026-03-17 00:57:49.775076 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-17 00:57:49.775082 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-17 00:57:49.775087 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-17 00:57:49.775092 | orchestrator | 2026-03-17 00:57:49.775100 | orchestrator | TASK [rabbitmq : include_tasks] 
************************************************ 2026-03-17 00:57:49.775106 | orchestrator | Tuesday 17 March 2026 00:55:09 +0000 (0:00:01.673) 0:00:26.832 ********* 2026-03-17 00:57:49.775111 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:57:49.775116 | orchestrator | 2026-03-17 00:57:49.775121 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-03-17 00:57:49.775127 | orchestrator | Tuesday 17 March 2026 00:55:09 +0000 (0:00:00.523) 0:00:27.356 ********* 2026-03-17 00:57:49.775132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.775142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.775151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.775157 | orchestrator | 2026-03-17 00:57:49.775162 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying 
over backend internal TLS certificate] *** 2026-03-17 00:57:49.775168 | orchestrator | Tuesday 17 March 2026 00:55:11 +0000 (0:00:01.784) 0:00:29.140 ********* 2026-03-17 00:57:49.775175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:49.775181 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:49.775187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:49.775192 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:49.775205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:49.775211 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:49.775216 | orchestrator | 2026-03-17 00:57:49.775221 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-03-17 00:57:49.775226 | orchestrator | Tuesday 17 March 2026 00:55:12 +0000 (0:00:00.366) 0:00:29.507 ********* 2026-03-17 
00:57:49.775234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:49.775240 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:49.775245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:49.775251 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:49.775256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:49.775265 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:49.775271 | orchestrator | 2026-03-17 00:57:49.775280 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-03-17 00:57:49.775293 | orchestrator | Tuesday 17 March 2026 00:55:13 +0000 (0:00:01.330) 0:00:30.838 ********* 2026-03-17 00:57:49.775303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.775320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.775331 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 00:57:49.775347 | orchestrator | 2026-03-17 00:57:49.775357 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-03-17 00:57:49.775367 | orchestrator | Tuesday 17 March 2026 00:55:14 +0000 (0:00:00.987) 0:00:31.825 ********* 2026-03-17 00:57:49.775373 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 00:57:49.775378 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:57:49.775383 | orchestrator | } 2026-03-17 00:57:49.775388 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 00:57:49.775394 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:57:49.775399 | orchestrator | } 2026-03-17 00:57:49.775406 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 00:57:49.775415 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:57:49.775424 | orchestrator | } 2026-03-17 00:57:49.775434 | orchestrator | 2026-03-17 00:57:49.775442 | orchestrator | 
TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 00:57:49.775451 | orchestrator | Tuesday 17 March 2026 00:55:14 +0000 (0:00:00.320) 0:00:32.146 ********* 2026-03-17 00:57:49.775467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:49.775477 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:57:49.775486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:49.775492 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:57:49.775497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 00:57:49.775507 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:57:49.775512 | orchestrator | 2026-03-17 00:57:49.775517 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-17 00:57:49.775522 | orchestrator | Tuesday 17 March 2026 00:55:15 +0000 (0:00:01.277) 0:00:33.423 ********* 2026-03-17 
00:57:49.775527 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:49.775533 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:49.775538 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:49.775543 | orchestrator | 2026-03-17 00:57:49.775548 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-17 00:57:49.775553 | orchestrator | Tuesday 17 March 2026 00:55:16 +0000 (0:00:00.808) 0:00:34.232 ********* 2026-03-17 00:57:49.775558 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:49.775564 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:49.775569 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:49.775574 | orchestrator | 2026-03-17 00:57:49.775579 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-17 00:57:49.775585 | orchestrator | Tuesday 17 March 2026 00:55:24 +0000 (0:00:07.908) 0:00:42.141 ********* 2026-03-17 00:57:49.775590 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:57:49.775595 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:57:49.775600 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:57:49.775606 | orchestrator | 2026-03-17 00:57:49.775611 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-17 00:57:49.775616 | orchestrator | 2026-03-17 00:57:49.775621 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-17 00:57:49.775629 | orchestrator | Tuesday 17 March 2026 00:55:24 +0000 (0:00:00.320) 0:00:42.461 ********* 2026-03-17 00:57:49.775636 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:57:49.775645 | orchestrator | 2026-03-17 00:57:49.775654 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-17 00:57:49.775663 | orchestrator | Tuesday 17 March 2026 00:55:25 +0000 (0:00:00.567) 0:00:43.029 ********* 
2026-03-17 00:57:49.775671 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:57:49.775680 | orchestrator |
2026-03-17 00:57:49.775687 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-17 00:57:49.775696 | orchestrator | Tuesday 17 March 2026 00:55:25 +0000 (0:00:00.096) 0:00:43.126 *********
2026-03-17 00:57:49.775704 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:49.775713 | orchestrator |
2026-03-17 00:57:49.775723 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-17 00:57:49.775732 | orchestrator | Tuesday 17 March 2026 00:55:27 +0000 (0:00:01.478) 0:00:44.604 *********
2026-03-17 00:57:49.775742 | orchestrator | changed: [testbed-node-0]
2026-03-17 00:57:49.775751 | orchestrator |
2026-03-17 00:57:49.775760 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-17 00:57:49.775768 | orchestrator |
2026-03-17 00:57:49.775778 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-17 00:57:49.775787 | orchestrator | Tuesday 17 March 2026 00:57:19 +0000 (0:01:52.171) 0:02:36.775 *********
2026-03-17 00:57:49.775795 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:49.775805 | orchestrator |
2026-03-17 00:57:49.775814 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-17 00:57:49.775824 | orchestrator | Tuesday 17 March 2026 00:57:19 +0000 (0:00:00.659) 0:02:37.435 *********
2026-03-17 00:57:49.775832 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:57:49.775847 | orchestrator |
2026-03-17 00:57:49.775857 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-17 00:57:49.775865 | orchestrator | Tuesday 17 March 2026 00:57:20 +0000 (0:00:00.097) 0:02:37.533 *********
2026-03-17 00:57:49.775873 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:49.775882 | orchestrator |
2026-03-17 00:57:49.775890 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-17 00:57:49.775898 | orchestrator | Tuesday 17 March 2026 00:57:26 +0000 (0:00:06.413) 0:02:43.946 *********
2026-03-17 00:57:49.775907 | orchestrator | changed: [testbed-node-1]
2026-03-17 00:57:49.775916 | orchestrator |
2026-03-17 00:57:49.775924 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-17 00:57:49.775931 | orchestrator |
2026-03-17 00:57:49.775939 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-17 00:57:49.775951 | orchestrator | Tuesday 17 March 2026 00:57:31 +0000 (0:00:05.433) 0:02:49.380 *********
2026-03-17 00:57:49.775960 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:49.775969 | orchestrator |
2026-03-17 00:57:49.775998 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-17 00:57:49.776009 | orchestrator | Tuesday 17 March 2026 00:57:32 +0000 (0:00:00.574) 0:02:49.954 *********
2026-03-17 00:57:49.776017 | orchestrator | skipping: [testbed-node-2]
2026-03-17 00:57:49.776025 | orchestrator |
2026-03-17 00:57:49.776033 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-17 00:57:49.776041 | orchestrator | Tuesday 17 March 2026 00:57:32 +0000 (0:00:00.100) 0:02:50.055 *********
2026-03-17 00:57:49.776049 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:49.776056 | orchestrator |
2026-03-17 00:57:49.776065 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-17 00:57:49.776073 | orchestrator | Tuesday 17 March 2026 00:57:34 +0000 (0:00:01.421) 0:02:51.476 *********
2026-03-17 00:57:49.776082 | orchestrator | changed: [testbed-node-2]
2026-03-17 00:57:49.776090 |
orchestrator |
2026-03-17 00:57:49.776099 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-17 00:57:49.776107 | orchestrator |
2026-03-17 00:57:49.776116 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-17 00:57:49.776124 | orchestrator | Tuesday 17 March 2026 00:57:41 +0000 (0:00:07.976) 0:02:59.453 *********
2026-03-17 00:57:49.776133 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 00:57:49.776142 | orchestrator |
2026-03-17 00:57:49.776151 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-17 00:57:49.776160 | orchestrator | Tuesday 17 March 2026 00:57:42 +0000 (0:00:00.916) 0:03:00.369 *********
2026-03-17 00:57:49.776169 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:57:49.776177 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:57:49.776186 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:57:49.776195 | orchestrator |
2026-03-17 00:57:49.776205 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 00:57:49.776214 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-17 00:57:49.776224 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-03-17 00:57:49.776234 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-17 00:57:49.776242 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-17 00:57:49.776252 | orchestrator |
2026-03-17 00:57:49.776258 | orchestrator |
2026-03-17 00:57:49.776263 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 00:57:49.776269 | orchestrator | Tuesday 17 March 2026 00:57:46 +0000 (0:00:03.641) 0:03:04.011 *********
2026-03-17 00:57:49.776280 | orchestrator | ===============================================================================
2026-03-17 00:57:49.776286 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 125.58s
2026-03-17 00:57:49.776298 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.31s
2026-03-17 00:57:49.776304 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.91s
2026-03-17 00:57:49.776309 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.64s
2026-03-17 00:57:49.776314 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.27s
2026-03-17 00:57:49.776320 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.16s
2026-03-17 00:57:49.776325 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.20s
2026-03-17 00:57:49.776330 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.12s
2026-03-17 00:57:49.776336 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.94s
2026-03-17 00:57:49.776341 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.80s
2026-03-17 00:57:49.776346 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.78s
2026-03-17 00:57:49.776351 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.67s
2026-03-17 00:57:49.776356 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.60s
2026-03-17 00:57:49.776362 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.45s
2026-03-17 00:57:49.776367 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.34s
2026-03-17 00:57:49.776372 | orchestrator | service-cert-copy : rabbitmq | Copying over backend internal TLS key ---- 1.33s
2026-03-17 00:57:49.776377 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.28s
2026-03-17 00:57:49.776382 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.18s
2026-03-17 00:57:49.776387 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.01s
2026-03-17 00:57:49.776393 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 0.99s
2026-03-17 00:57:49.776398 | orchestrator | 2026-03-17 00:57:49 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:57:49.776403 | orchestrator | 2026-03-17 00:57:49 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:57:52.808960 | orchestrator | 2026-03-17 00:57:52 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:57:52.810911 | orchestrator | 2026-03-17 00:57:52 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:57:52.812918 | orchestrator | 2026-03-17 00:57:52 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:57:52.813013 | orchestrator | 2026-03-17 00:57:52 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:57:55.842623 | orchestrator | 2026-03-17 00:57:55 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:57:55.842671 | orchestrator | 2026-03-17 00:57:55 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:57:55.843766 | orchestrator | 2026-03-17 00:57:55 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:57:55.843800 | orchestrator | 2026-03-17 00:57:55 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:57:58.879605 | orchestrator |
2026-03-17 00:57:58 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:57:58.881694 | orchestrator | 2026-03-17 00:57:58 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:57:58.883810 | orchestrator | 2026-03-17 00:57:58 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:57:58.883880 | orchestrator | 2026-03-17 00:57:58 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:01.907443 | orchestrator | 2026-03-17 00:58:01 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:01.908134 | orchestrator | 2026-03-17 00:58:01 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:01.908990 | orchestrator | 2026-03-17 00:58:01 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:01.909025 | orchestrator | 2026-03-17 00:58:01 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:04.950185 | orchestrator | 2026-03-17 00:58:04 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:04.954683 | orchestrator | 2026-03-17 00:58:04 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:04.956481 | orchestrator | 2026-03-17 00:58:04 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:04.956787 | orchestrator | 2026-03-17 00:58:04 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:07.993767 | orchestrator | 2026-03-17 00:58:07 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:07.993857 | orchestrator | 2026-03-17 00:58:07 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:07.995522 | orchestrator | 2026-03-17 00:58:07 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:07.995578 | orchestrator | 2026-03-17 00:58:07 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:11.029175 | orchestrator | 2026-03-17 00:58:11 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:11.029262 | orchestrator | 2026-03-17 00:58:11 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:11.029270 | orchestrator | 2026-03-17 00:58:11 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:11.029275 | orchestrator | 2026-03-17 00:58:11 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:14.066417 | orchestrator | 2026-03-17 00:58:14 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:14.068388 | orchestrator | 2026-03-17 00:58:14 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:14.071293 | orchestrator | 2026-03-17 00:58:14 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:14.071349 | orchestrator | 2026-03-17 00:58:14 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:17.121590 | orchestrator | 2026-03-17 00:58:17 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:17.121772 | orchestrator | 2026-03-17 00:58:17 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:17.124110 | orchestrator | 2026-03-17 00:58:17 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:17.124166 | orchestrator | 2026-03-17 00:58:17 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:20.168193 | orchestrator | 2026-03-17 00:58:20 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:20.169528 | orchestrator | 2026-03-17 00:58:20 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:20.169791 | orchestrator | 2026-03-17 00:58:20 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:20.169819 | orchestrator | 2026-03-17 00:58:20 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:23.211787 | orchestrator | 2026-03-17 00:58:23 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:23.215432 | orchestrator | 2026-03-17 00:58:23 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:23.216552 | orchestrator | 2026-03-17 00:58:23 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:23.216586 | orchestrator | 2026-03-17 00:58:23 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:26.257769 | orchestrator | 2026-03-17 00:58:26 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:26.258709 | orchestrator | 2026-03-17 00:58:26 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:26.261636 | orchestrator | 2026-03-17 00:58:26 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:26.261680 | orchestrator | 2026-03-17 00:58:26 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:29.310657 | orchestrator | 2026-03-17 00:58:29 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:29.311974 | orchestrator | 2026-03-17 00:58:29 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:29.312824 | orchestrator | 2026-03-17 00:58:29 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:29.312858 | orchestrator | 2026-03-17 00:58:29 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:32.347417 | orchestrator | 2026-03-17 00:58:32 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:32.347459 | orchestrator | 2026-03-17 00:58:32 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:32.348050 | orchestrator | 2026-03-17 00:58:32 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:32.348073 | orchestrator | 2026-03-17 00:58:32 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:35.375296 | orchestrator | 2026-03-17 00:58:35 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:35.375705 | orchestrator | 2026-03-17 00:58:35 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:35.376449 | orchestrator | 2026-03-17 00:58:35 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:35.376483 | orchestrator | 2026-03-17 00:58:35 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:38.415176 | orchestrator | 2026-03-17 00:58:38 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:38.418372 | orchestrator | 2026-03-17 00:58:38 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:38.421149 | orchestrator | 2026-03-17 00:58:38 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:38.421201 | orchestrator | 2026-03-17 00:58:38 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:41.459402 | orchestrator | 2026-03-17 00:58:41 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:41.461012 | orchestrator | 2026-03-17 00:58:41 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:41.462539 | orchestrator | 2026-03-17 00:58:41 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:41.462564 | orchestrator | 2026-03-17 00:58:41 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:44.501340 | orchestrator | 2026-03-17 00:58:44 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:44.503817 | orchestrator | 2026-03-17 00:58:44 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:44.505737 | orchestrator | 2026-03-17 00:58:44 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:44.505784 | orchestrator | 2026-03-17 00:58:44 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:47.535256 | orchestrator | 2026-03-17 00:58:47 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:47.536379 | orchestrator | 2026-03-17 00:58:47 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state STARTED
2026-03-17 00:58:47.539123 | orchestrator | 2026-03-17 00:58:47 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED
2026-03-17 00:58:47.539187 | orchestrator | 2026-03-17 00:58:47 | INFO  | Wait 1 second(s) until the next check
2026-03-17 00:58:50.575061 | orchestrator | 2026-03-17 00:58:50 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED
2026-03-17 00:58:50.578590 | orchestrator | 2026-03-17 00:58:50 | INFO  | Task bb3064fc-dd5d-4b19-8d5e-2045805e1849 is in state SUCCESS
2026-03-17 00:58:50.579644 | orchestrator |
2026-03-17 00:58:50.579705 | orchestrator |
2026-03-17 00:58:50.579715 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 00:58:50.579724 | orchestrator |
2026-03-17 00:58:50.579731 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 00:58:50.579737 | orchestrator | Tuesday 17 March 2026 00:55:30 +0000 (0:00:00.458) 0:00:00.458 *********
2026-03-17 00:58:50.579744 | orchestrator | ok: [testbed-node-0]
2026-03-17 00:58:50.579750 | orchestrator | ok: [testbed-node-1]
2026-03-17 00:58:50.579757 | orchestrator | ok: [testbed-node-2]
2026-03-17 00:58:50.579763 | orchestrator | ok: [testbed-node-3]
2026-03-17 00:58:50.579769 | orchestrator | ok: [testbed-node-4]
2026-03-17 00:58:50.579775 | orchestrator | ok: [testbed-node-5]
2026-03-17
00:58:50.579781 | orchestrator |
2026-03-17 00:58:50.579787 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 00:58:50.579794 | orchestrator | Tuesday 17 March 2026 00:55:31 +0000 (0:00:00.793) 0:00:01.251 *********
2026-03-17 00:58:50.579800 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-17 00:58:50.579807 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-17 00:58:50.579814 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-03-17 00:58:50.579820 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-17 00:58:50.579827 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-03-17 00:58:50.579833 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-03-17 00:58:50.579839 | orchestrator |
2026-03-17 00:58:50.579846 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-03-17 00:58:50.579852 | orchestrator |
2026-03-17 00:58:50.579859 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-03-17 00:58:50.579866 | orchestrator | Tuesday 17 March 2026 00:55:32 +0000 (0:00:01.023) 0:00:02.275 *********
2026-03-17 00:58:50.579873 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 00:58:50.579878 | orchestrator |
2026-03-17 00:58:50.579882 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-03-17 00:58:50.579887 | orchestrator | Tuesday 17 March 2026 00:55:33 +0000 (0:00:00.948) 0:00:03.224 *********
2026-03-17 00:58:50.579893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.579917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.579921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.579947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.579969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.579976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.579982 | orchestrator |
2026-03-17 00:58:50.580002 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-03-17 00:58:50.580008 | orchestrator | Tuesday 17 March 2026 00:55:35 +0000 (0:00:01.563) 0:00:04.787 *********
2026-03-17 00:58:50.580015 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580438 | orchestrator |
2026-03-17 00:58:50.580445 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-03-17 00:58:50.580453 | orchestrator | Tuesday 17 March 2026 00:55:36 +0000 (0:00:01.448) 0:00:06.235 *********
2026-03-17 00:58:50.580461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580512 | orchestrator |
2026-03-17 00:58:50.580517 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-03-17 00:58:50.580521 | orchestrator | Tuesday 17 March 2026 00:55:37 +0000 (0:00:01.240) 0:00:07.475 *********
2026-03-17 00:58:50.580526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580539 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580556 | orchestrator |
2026-03-17 00:58:50.580564 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-03-17 00:58:50.580568 | orchestrator | Tuesday 17 March 2026 00:55:39 +0000 (0:00:01.564) 0:00:09.039 *********
2026-03-17 00:58:50.580573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580599 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.580603 | orchestrator |
2026-03-17 00:58:50.580608 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-03-17 00:58:50.580613 | orchestrator | Tuesday 17 March 2026 00:55:41 +0000 (0:00:02.202) 0:00:11.242 *********
2026-03-17 00:58:50.580618 | orchestrator | changed: [testbed-node-0] => {
2026-03-17 00:58:50.580623 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:58:50.580628 | orchestrator | }
2026-03-17 00:58:50.580632 | orchestrator | changed: [testbed-node-1] => {
2026-03-17 00:58:50.580637 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:58:50.580641 | orchestrator | }
2026-03-17 00:58:50.580646 | orchestrator | changed: [testbed-node-2] => {
2026-03-17 00:58:50.580649 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:58:50.580653 | orchestrator | }
2026-03-17 00:58:50.580657 | orchestrator | changed: [testbed-node-3] => {
2026-03-17 00:58:50.580661 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:58:50.580664 | orchestrator | }
2026-03-17 00:58:50.580668 | orchestrator | changed: [testbed-node-4] => {
2026-03-17 00:58:50.580673 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:58:50.580679 | orchestrator | }
2026-03-17 00:58:50.580685 | orchestrator | changed: [testbed-node-5] => {
2026-03-17 00:58:50.580691 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 00:58:50.580697 | orchestrator | }
2026-03-17 00:58:50.580703 | orchestrator |
2026-03-17 00:58:50.580709 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-17 00:58:50.580716 | orchestrator | Tuesday 17 March 2026 00:55:42 +0000 (0:00:00.988) 0:00:12.231 *********
2026-03-17 00:58:50.581183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.581221 | orchestrator | skipping: [testbed-node-0]
2026-03-17 00:58:50.581251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-17 00:58:50.581256 | orchestrator | skipping: [testbed-node-1]
2026-03-17 00:58:50.581260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro',
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.581264 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.581268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.581272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.581276 | orchestrator | skipping: [testbed-node-3] 2026-03-17 00:58:50.581280 | orchestrator | skipping: [testbed-node-4] 2026-03-17 00:58:50.581283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.581287 | orchestrator | skipping: [testbed-node-5] 2026-03-17 00:58:50.581291 | orchestrator | 2026-03-17 00:58:50.581295 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-17 00:58:50.581299 | 
orchestrator | Tuesday 17 March 2026 00:55:43 +0000 (0:00:01.312) 0:00:13.543 ********* 2026-03-17 00:58:50.581303 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.581307 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.581310 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.581314 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:50.581318 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:50.581322 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:50.581325 | orchestrator | 2026-03-17 00:58:50.581329 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-17 00:58:50.581333 | orchestrator | Tuesday 17 March 2026 00:55:46 +0000 (0:00:02.942) 0:00:16.485 ********* 2026-03-17 00:58:50.581337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-17 00:58:50.581342 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-17 00:58:50.581348 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-17 00:58:50.581359 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-17 00:58:50.581365 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-17 00:58:50.581375 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-17 00:58:50.581382 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:58:50.581388 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:58:50.581394 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:58:50.581400 | orchestrator 
| changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:58:50.581405 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:58:50.581411 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-17 00:58:50.581435 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-17 00:58:50.581444 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-17 00:58:50.581450 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-17 00:58:50.581457 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-17 00:58:50.581464 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-17 00:58:50.581470 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-17 00:58:50.581476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:58:50.581482 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:58:50.581488 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:58:50.581493 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': 
'60000'}) 2026-03-17 00:58:50.581499 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:58:50.581505 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-17 00:58:50.581510 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:58:50.581516 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:58:50.581522 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:58:50.581528 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:58:50.581533 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:58:50.581540 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-17 00:58:50.581546 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:58:50.581552 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:58:50.581564 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:58:50.581570 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:58:50.581576 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:58:50.581582 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-17 00:58:50.581588 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 
2026-03-17 00:58:50.581595 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-17 00:58:50.581601 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-17 00:58:50.581607 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-17 00:58:50.581612 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-17 00:58:50.581618 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-17 00:58:50.581636 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-17 00:58:50.581645 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-17 00:58:50.581652 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-17 00:58:50.581658 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-17 00:58:50.581664 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-17 00:58:50.581690 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-17 00:58:50.581697 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 
2026-03-17 00:58:50.581702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-17 00:58:50.581708 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-17 00:58:50.581715 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-17 00:58:50.581720 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-17 00:58:50.581727 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-17 00:58:50.581733 | orchestrator | 2026-03-17 00:58:50.581740 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:58:50.581747 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:18.430) 0:00:34.915 ********* 2026-03-17 00:58:50.581753 | orchestrator | 2026-03-17 00:58:50.581759 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:58:50.581766 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:00.227) 0:00:35.143 ********* 2026-03-17 00:58:50.581772 | orchestrator | 2026-03-17 00:58:50.581779 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:58:50.581785 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:00.074) 0:00:35.217 ********* 2026-03-17 00:58:50.581798 | orchestrator | 2026-03-17 00:58:50.581805 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:58:50.581814 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:00.088) 0:00:35.306 ********* 2026-03-17 00:58:50.581821 | orchestrator | 2026-03-17 00:58:50.581826 | 
orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:58:50.581832 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:00.069) 0:00:35.375 ********* 2026-03-17 00:58:50.581838 | orchestrator | 2026-03-17 00:58:50.581844 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-17 00:58:50.581850 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:00.064) 0:00:35.440 ********* 2026-03-17 00:58:50.581856 | orchestrator | 2026-03-17 00:58:50.581862 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-17 00:58:50.581868 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:00.080) 0:00:35.521 ********* 2026-03-17 00:58:50.581874 | orchestrator | ok: [testbed-node-3] 2026-03-17 00:58:50.581881 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.581887 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.581893 | orchestrator | ok: [testbed-node-5] 2026-03-17 00:58:50.581899 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.581905 | orchestrator | ok: [testbed-node-4] 2026-03-17 00:58:50.581911 | orchestrator | 2026-03-17 00:58:50.581917 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-17 00:58:50.581948 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:01.762) 0:00:37.283 ********* 2026-03-17 00:58:50.581955 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.581961 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.581967 | orchestrator | changed: [testbed-node-3] 2026-03-17 00:58:50.581974 | orchestrator | changed: [testbed-node-5] 2026-03-17 00:58:50.581980 | orchestrator | changed: [testbed-node-4] 2026-03-17 00:58:50.581986 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.581992 | orchestrator | 2026-03-17 00:58:50.581997 | orchestrator | PLAY [Apply role ovn-db] 
******************************************************* 2026-03-17 00:58:50.582003 | orchestrator | 2026-03-17 00:58:50.582008 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-17 00:58:50.582062 | orchestrator | Tuesday 17 March 2026 00:56:15 +0000 (0:00:07.932) 0:00:45.216 ********* 2026-03-17 00:58:50.582070 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:50.582076 | orchestrator | 2026-03-17 00:58:50.582082 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-17 00:58:50.582088 | orchestrator | Tuesday 17 March 2026 00:56:17 +0000 (0:00:02.082) 0:00:47.298 ********* 2026-03-17 00:58:50.582094 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:50.582100 | orchestrator | 2026-03-17 00:58:50.582106 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-17 00:58:50.582113 | orchestrator | Tuesday 17 March 2026 00:56:18 +0000 (0:00:00.726) 0:00:48.024 ********* 2026-03-17 00:58:50.582126 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.582132 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.582138 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.582145 | orchestrator | 2026-03-17 00:58:50.582152 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-17 00:58:50.582158 | orchestrator | Tuesday 17 March 2026 00:56:19 +0000 (0:00:01.014) 0:00:49.039 ********* 2026-03-17 00:58:50.582165 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.582171 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.582177 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.582184 | orchestrator | 2026-03-17 00:58:50.582191 | orchestrator | TASK [ovn-db : Divide 
hosts by their OVN SB volume availability] *************** 2026-03-17 00:58:50.582197 | orchestrator | Tuesday 17 March 2026 00:56:19 +0000 (0:00:00.270) 0:00:49.310 ********* 2026-03-17 00:58:50.582203 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.582215 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.582221 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.582227 | orchestrator | 2026-03-17 00:58:50.582232 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-17 00:58:50.582274 | orchestrator | Tuesday 17 March 2026 00:56:19 +0000 (0:00:00.264) 0:00:49.574 ********* 2026-03-17 00:58:50.582282 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.582287 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.582293 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.582298 | orchestrator | 2026-03-17 00:58:50.582305 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-17 00:58:50.582311 | orchestrator | Tuesday 17 March 2026 00:56:20 +0000 (0:00:00.223) 0:00:49.797 ********* 2026-03-17 00:58:50.582317 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.582323 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.582329 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.582335 | orchestrator | 2026-03-17 00:58:50.582341 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-17 00:58:50.582346 | orchestrator | Tuesday 17 March 2026 00:56:20 +0000 (0:00:00.386) 0:00:50.184 ********* 2026-03-17 00:58:50.582353 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582358 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582364 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582370 | orchestrator | 2026-03-17 00:58:50.582375 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] 
***************************** 2026-03-17 00:58:50.582381 | orchestrator | Tuesday 17 March 2026 00:56:20 +0000 (0:00:00.270) 0:00:50.455 ********* 2026-03-17 00:58:50.582387 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582393 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582398 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582404 | orchestrator | 2026-03-17 00:58:50.582409 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-17 00:58:50.582415 | orchestrator | Tuesday 17 March 2026 00:56:20 +0000 (0:00:00.196) 0:00:50.651 ********* 2026-03-17 00:58:50.582421 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582426 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582432 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582438 | orchestrator | 2026-03-17 00:58:50.582443 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-17 00:58:50.582449 | orchestrator | Tuesday 17 March 2026 00:56:21 +0000 (0:00:00.357) 0:00:51.008 ********* 2026-03-17 00:58:50.582453 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582457 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582461 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582465 | orchestrator | 2026-03-17 00:58:50.582469 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-17 00:58:50.582473 | orchestrator | Tuesday 17 March 2026 00:56:21 +0000 (0:00:00.345) 0:00:51.354 ********* 2026-03-17 00:58:50.582476 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582480 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582484 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582487 | orchestrator | 2026-03-17 00:58:50.582492 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no 
leader] ***************** 2026-03-17 00:58:50.582496 | orchestrator | Tuesday 17 March 2026 00:56:22 +0000 (0:00:00.518) 0:00:51.872 ********* 2026-03-17 00:58:50.582500 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582503 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582507 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582511 | orchestrator | 2026-03-17 00:58:50.582515 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-17 00:58:50.582519 | orchestrator | Tuesday 17 March 2026 00:56:22 +0000 (0:00:00.584) 0:00:52.457 ********* 2026-03-17 00:58:50.582523 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582526 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582536 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582540 | orchestrator | 2026-03-17 00:58:50.582544 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-17 00:58:50.582547 | orchestrator | Tuesday 17 March 2026 00:56:23 +0000 (0:00:00.398) 0:00:52.855 ********* 2026-03-17 00:58:50.582551 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582555 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582559 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582562 | orchestrator | 2026-03-17 00:58:50.582566 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-17 00:58:50.582570 | orchestrator | Tuesday 17 March 2026 00:56:23 +0000 (0:00:00.442) 0:00:53.297 ********* 2026-03-17 00:58:50.582574 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582578 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582582 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582585 | orchestrator | 2026-03-17 00:58:50.582589 | orchestrator | TASK [ovn-db : Get OVN SB database information] 
******************************** 2026-03-17 00:58:50.582593 | orchestrator | Tuesday 17 March 2026 00:56:24 +0000 (0:00:00.578) 0:00:53.876 ********* 2026-03-17 00:58:50.582597 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582600 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582604 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582608 | orchestrator | 2026-03-17 00:58:50.582612 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-17 00:58:50.582616 | orchestrator | Tuesday 17 March 2026 00:56:24 +0000 (0:00:00.449) 0:00:54.325 ********* 2026-03-17 00:58:50.582619 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582628 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582632 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582647 | orchestrator | 2026-03-17 00:58:50.582651 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-17 00:58:50.582654 | orchestrator | Tuesday 17 March 2026 00:56:24 +0000 (0:00:00.332) 0:00:54.658 ********* 2026-03-17 00:58:50.582658 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582662 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582666 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582669 | orchestrator | 2026-03-17 00:58:50.582673 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-17 00:58:50.582677 | orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.321) 0:00:54.979 ********* 2026-03-17 00:58:50.582681 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 00:58:50.582685 | orchestrator | 2026-03-17 00:58:50.582694 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-17 00:58:50.582699 | 
orchestrator | Tuesday 17 March 2026 00:56:25 +0000 (0:00:00.733) 0:00:55.713 ********* 2026-03-17 00:58:50.582702 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.582706 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.582710 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.582714 | orchestrator | 2026-03-17 00:58:50.582717 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-17 00:58:50.582721 | orchestrator | Tuesday 17 March 2026 00:56:26 +0000 (0:00:00.519) 0:00:56.233 ********* 2026-03-17 00:58:50.582725 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.582729 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.582733 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.582736 | orchestrator | 2026-03-17 00:58:50.582740 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-17 00:58:50.582744 | orchestrator | Tuesday 17 March 2026 00:56:26 +0000 (0:00:00.521) 0:00:56.754 ********* 2026-03-17 00:58:50.582748 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582752 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582756 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582759 | orchestrator | 2026-03-17 00:58:50.582763 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-17 00:58:50.582774 | orchestrator | Tuesday 17 March 2026 00:56:27 +0000 (0:00:00.404) 0:00:57.159 ********* 2026-03-17 00:58:50.582778 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582782 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582785 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582789 | orchestrator | 2026-03-17 00:58:50.582793 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-17 00:58:50.582797 | orchestrator | Tuesday 17 March 
2026 00:56:27 +0000 (0:00:00.261) 0:00:57.420 ********* 2026-03-17 00:58:50.582801 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582805 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582808 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582812 | orchestrator | 2026-03-17 00:58:50.582816 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-17 00:58:50.582820 | orchestrator | Tuesday 17 March 2026 00:56:28 +0000 (0:00:00.585) 0:00:58.006 ********* 2026-03-17 00:58:50.582823 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582827 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582831 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582835 | orchestrator | 2026-03-17 00:58:50.582839 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-17 00:58:50.582842 | orchestrator | Tuesday 17 March 2026 00:56:28 +0000 (0:00:00.708) 0:00:58.715 ********* 2026-03-17 00:58:50.582846 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582850 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582854 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582858 | orchestrator | 2026-03-17 00:58:50.582861 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-17 00:58:50.582865 | orchestrator | Tuesday 17 March 2026 00:56:29 +0000 (0:00:00.691) 0:00:59.406 ********* 2026-03-17 00:58:50.582869 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.582873 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.582877 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.582880 | orchestrator | 2026-03-17 00:58:50.582884 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-17 00:58:50.582888 | orchestrator | Tuesday 17 
March 2026 00:56:30 +0000 (0:00:00.409) 0:00:59.816 ********* 2026-03-17 00:58:50.582895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.582903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.582910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.582969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.582977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.582981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.582985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 
00:58:50.582990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.582995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.582999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583025 | orchestrator | 2026-03-17 00:58:50.583029 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-17 00:58:50.583033 | orchestrator | Tuesday 17 March 2026 00:56:34 +0000 (0:00:04.031) 0:01:03.848 ********* 2026-03-17 00:58:50.583037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 
'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583098 | orchestrator | 2026-03-17 00:58:50.583102 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-03-17 00:58:50.583106 | orchestrator | Tuesday 17 March 2026 00:56:39 +0000 (0:00:05.422) 0:01:09.271 ********* 2026-03-17 00:58:50.583110 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-03-17 00:58:50.583114 | orchestrator | 2026-03-17 00:58:50.583118 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 
2026-03-17 00:58:50.583122 | orchestrator | Tuesday 17 March 2026 00:56:40 +0000 (0:00:00.577) 0:01:09.848 ********* 2026-03-17 00:58:50.583126 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.583131 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.583137 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.583148 | orchestrator | 2026-03-17 00:58:50.583157 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-03-17 00:58:50.583164 | orchestrator | Tuesday 17 March 2026 00:56:40 +0000 (0:00:00.684) 0:01:10.533 ********* 2026-03-17 00:58:50.583169 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.583175 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.583181 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.583187 | orchestrator | 2026-03-17 00:58:50.583193 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-03-17 00:58:50.583198 | orchestrator | Tuesday 17 March 2026 00:56:42 +0000 (0:00:01.648) 0:01:12.182 ********* 2026-03-17 00:58:50.583204 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.583214 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.583220 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.583226 | orchestrator | 2026-03-17 00:58:50.583232 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-03-17 00:58:50.583238 | orchestrator | Tuesday 17 March 2026 00:56:44 +0000 (0:00:01.860) 0:01:14.042 ********* 2026-03-17 00:58:50.583249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583275 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583342 | orchestrator | 2026-03-17 00:58:50.583349 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-17 00:58:50.583356 | orchestrator | Tuesday 17 March 2026 00:56:47 +0000 (0:00:03.587) 0:01:17.630 ********* 2026-03-17 00:58:50.583364 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 00:58:50.583369 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.583375 | orchestrator | } 2026-03-17 00:58:50.583381 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 00:58:50.583386 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.583397 | orchestrator | } 2026-03-17 00:58:50.583403 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 00:58:50.583408 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.583414 | orchestrator | } 2026-03-17 00:58:50.583419 | orchestrator | 2026-03-17 00:58:50.583425 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 00:58:50.583431 | orchestrator | Tuesday 17 March 2026 00:56:48 +0000 (0:00:00.321) 0:01:17.951 ********* 2026-03-17 00:58:50.583438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': 
{'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583496 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583500 | orchestrator | 2026-03-17 00:58:50.583504 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-17 00:58:50.583510 | orchestrator | Tuesday 17 March 2026 00:56:49 +0000 (0:00:01.710) 0:01:19.662 ********* 2026-03-17 00:58:50.583514 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-03-17 00:58:50.583519 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-03-17 00:58:50.583522 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-03-17 00:58:50.583526 | orchestrator | 2026-03-17 00:58:50.583530 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-17 00:58:50.583534 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:01.190) 0:01:20.853 ********* 2026-03-17 00:58:50.583538 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 00:58:50.583541 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.583545 | orchestrator | } 2026-03-17 00:58:50.583549 | orchestrator | 
changed: [testbed-node-1] => { 2026-03-17 00:58:50.583553 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.583557 | orchestrator | } 2026-03-17 00:58:50.583560 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 00:58:50.583564 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.583571 | orchestrator | } 2026-03-17 00:58:50.583575 | orchestrator | 2026-03-17 00:58:50.583579 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:58:50.583583 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:00.523) 0:01:21.377 ********* 2026-03-17 00:58:50.583587 | orchestrator | 2026-03-17 00:58:50.583590 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:58:50.583594 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:00.064) 0:01:21.441 ********* 2026-03-17 00:58:50.583598 | orchestrator | 2026-03-17 00:58:50.583602 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:58:50.583606 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:00.059) 0:01:21.501 ********* 2026-03-17 00:58:50.583609 | orchestrator | 2026-03-17 00:58:50.583613 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-17 00:58:50.583621 | orchestrator | Tuesday 17 March 2026 00:56:51 +0000 (0:00:00.063) 0:01:21.565 ********* 2026-03-17 00:58:50.583625 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.583629 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.583632 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.583636 | orchestrator | 2026-03-17 00:58:50.583640 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-17 00:58:50.583644 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:08.213) 0:01:29.779 ********* 2026-03-17 
00:58:50.583648 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.583651 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.583655 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.583659 | orchestrator | 2026-03-17 00:58:50.583663 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-03-17 00:58:50.583667 | orchestrator | Tuesday 17 March 2026 00:57:10 +0000 (0:00:10.381) 0:01:40.161 ********* 2026-03-17 00:58:50.583670 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-03-17 00:58:50.583674 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-03-17 00:58:50.583678 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-03-17 00:58:50.583682 | orchestrator | 2026-03-17 00:58:50.583686 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-17 00:58:50.583690 | orchestrator | Tuesday 17 March 2026 00:57:22 +0000 (0:00:11.926) 0:01:52.087 ********* 2026-03-17 00:58:50.583694 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.583698 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.583702 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.583705 | orchestrator | 2026-03-17 00:58:50.583709 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-17 00:58:50.583713 | orchestrator | Tuesday 17 March 2026 00:57:36 +0000 (0:00:14.441) 0:02:06.529 ********* 2026-03-17 00:58:50.583717 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.583721 | orchestrator | 2026-03-17 00:58:50.583724 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-17 00:58:50.583728 | orchestrator | Tuesday 17 March 2026 00:57:36 +0000 (0:00:00.106) 0:02:06.635 ********* 2026-03-17 00:58:50.583732 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.583736 | orchestrator | ok: [testbed-node-0] 
2026-03-17 00:58:50.583740 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.583744 | orchestrator | 2026-03-17 00:58:50.583748 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-17 00:58:50.583752 | orchestrator | Tuesday 17 March 2026 00:57:37 +0000 (0:00:00.957) 0:02:07.592 ********* 2026-03-17 00:58:50.583755 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.583759 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.583763 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.583767 | orchestrator | 2026-03-17 00:58:50.583771 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-17 00:58:50.583774 | orchestrator | Tuesday 17 March 2026 00:57:38 +0000 (0:00:00.605) 0:02:08.198 ********* 2026-03-17 00:58:50.583778 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.583782 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.583786 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.583790 | orchestrator | 2026-03-17 00:58:50.583794 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-17 00:58:50.583798 | orchestrator | Tuesday 17 March 2026 00:57:39 +0000 (0:00:00.687) 0:02:08.885 ********* 2026-03-17 00:58:50.583801 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.583805 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.583809 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.583813 | orchestrator | 2026-03-17 00:58:50.583817 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-17 00:58:50.583821 | orchestrator | Tuesday 17 March 2026 00:57:39 +0000 (0:00:00.624) 0:02:09.509 ********* 2026-03-17 00:58:50.583824 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.583832 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.583836 | orchestrator 
| ok: [testbed-node-2] 2026-03-17 00:58:50.583840 | orchestrator | 2026-03-17 00:58:50.583844 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-17 00:58:50.583848 | orchestrator | Tuesday 17 March 2026 00:57:41 +0000 (0:00:01.426) 0:02:10.936 ********* 2026-03-17 00:58:50.583852 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.583856 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.583863 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.583867 | orchestrator | 2026-03-17 00:58:50.583871 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-03-17 00:58:50.583874 | orchestrator | Tuesday 17 March 2026 00:57:42 +0000 (0:00:01.022) 0:02:11.959 ********* 2026-03-17 00:58:50.583878 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-03-17 00:58:50.583882 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-03-17 00:58:50.583886 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-03-17 00:58:50.583890 | orchestrator | 2026-03-17 00:58:50.583893 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-17 00:58:50.583897 | orchestrator | Tuesday 17 March 2026 00:57:43 +0000 (0:00:00.808) 0:02:12.768 ********* 2026-03-17 00:58:50.583901 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.583905 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.583909 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.583913 | orchestrator | 2026-03-17 00:58:50.583917 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-17 00:58:50.583947 | orchestrator | Tuesday 17 March 2026 00:57:43 +0000 (0:00:00.284) 0:02:13.052 ********* 2026-03-17 00:58:50.583952 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583956 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583960 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583964 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583969 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583977 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583984 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.583991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 
'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.583996 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584004 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': 
'1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584012 | orchestrator | 2026-03-17 00:58:50.584015 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-17 00:58:50.584023 | orchestrator | Tuesday 17 March 2026 00:57:46 +0000 (0:00:03.088) 0:02:16.141 ********* 2026-03-17 00:58:50.584027 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584031 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584038 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 
'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584047 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584059 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-17 00:58:50.584082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584093 | orchestrator | 2026-03-17 00:58:50.584097 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-03-17 00:58:50.584100 | orchestrator | Tuesday 17 March 2026 00:57:50 +0000 (0:00:04.303) 0:02:20.444 ********* 2026-03-17 00:58:50.584104 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-03-17 00:58:50.584108 | orchestrator | 2026-03-17 00:58:50.584112 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-03-17 00:58:50.584116 | orchestrator | Tuesday 17 March 2026 00:57:51 +0000 (0:00:00.486) 0:02:20.931 ********* 2026-03-17 00:58:50.584120 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.584123 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.584127 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.584131 | 
orchestrator | 2026-03-17 00:58:50.584135 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-03-17 00:58:50.584138 | orchestrator | Tuesday 17 March 2026 00:57:51 +0000 (0:00:00.680) 0:02:21.611 ********* 2026-03-17 00:58:50.584142 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.584146 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.584150 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.584154 | orchestrator | 2026-03-17 00:58:50.584157 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-03-17 00:58:50.584161 | orchestrator | Tuesday 17 March 2026 00:57:53 +0000 (0:00:01.649) 0:02:23.260 ********* 2026-03-17 00:58:50.584165 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.584174 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.584178 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.584182 | orchestrator | 2026-03-17 00:58:50.584185 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-03-17 00:58:50.584189 | orchestrator | Tuesday 17 March 2026 00:57:55 +0000 (0:00:01.526) 0:02:24.787 ********* 2026-03-17 00:58:50.584193 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584200 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': 
{'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584207 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584214 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584248 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': 
'1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584291 | orchestrator | 2026-03-17 00:58:50.584298 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-17 00:58:50.584305 | orchestrator | Tuesday 17 March 2026 00:57:59 +0000 (0:00:04.573) 0:02:29.360 ********* 2026-03-17 
00:58:50.584311 | orchestrator | ok: [testbed-node-0] => { 2026-03-17 00:58:50.584317 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.584323 | orchestrator | } 2026-03-17 00:58:50.584329 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 00:58:50.584335 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.584341 | orchestrator | } 2026-03-17 00:58:50.584347 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 00:58:50.584354 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.584360 | orchestrator | } 2026-03-17 00:58:50.584366 | orchestrator | 2026-03-17 00:58:50.584372 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 00:58:50.584379 | orchestrator | Tuesday 17 March 2026 00:57:59 +0000 (0:00:00.295) 0:02:29.656 ********* 2026-03-17 00:58:50.584392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 
00:58:50.584413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 00:58:50.584508 | orchestrator | included: 
/ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-2, testbed-node-1, testbed-node-0 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 00:58:50.584514 | orchestrator | 2026-03-17 00:58:50.584520 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-17 00:58:50.584526 | orchestrator | Tuesday 17 March 2026 00:58:02 +0000 (0:00:02.354) 0:02:32.010 ********* 2026-03-17 00:58:50.584533 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-03-17 00:58:50.584540 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-03-17 00:58:50.584546 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-03-17 00:58:50.584552 | orchestrator | 2026-03-17 00:58:50.584558 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-17 00:58:50.584565 | orchestrator | Tuesday 17 March 2026 00:58:03 +0000 (0:00:00.816) 0:02:32.827 ********* 2026-03-17 00:58:50.584572 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 00:58:50.584578 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.584584 | orchestrator | } 2026-03-17 00:58:50.584590 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 00:58:50.584596 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.584603 | orchestrator | } 2026-03-17 00:58:50.584609 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 00:58:50.584616 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 00:58:50.584622 | orchestrator | } 2026-03-17 00:58:50.584628 | orchestrator | 2026-03-17 
00:58:50.584633 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:58:50.584639 | orchestrator | Tuesday 17 March 2026 00:58:03 +0000 (0:00:00.659) 0:02:33.486 ********* 2026-03-17 00:58:50.584645 | orchestrator | 2026-03-17 00:58:50.584654 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:58:50.584660 | orchestrator | Tuesday 17 March 2026 00:58:03 +0000 (0:00:00.080) 0:02:33.567 ********* 2026-03-17 00:58:50.584666 | orchestrator | 2026-03-17 00:58:50.584672 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-17 00:58:50.584677 | orchestrator | Tuesday 17 March 2026 00:58:03 +0000 (0:00:00.064) 0:02:33.632 ********* 2026-03-17 00:58:50.584683 | orchestrator | 2026-03-17 00:58:50.584689 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-17 00:58:50.584694 | orchestrator | Tuesday 17 March 2026 00:58:03 +0000 (0:00:00.063) 0:02:33.695 ********* 2026-03-17 00:58:50.584700 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.584707 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.584713 | orchestrator | 2026-03-17 00:58:50.584719 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-17 00:58:50.584726 | orchestrator | Tuesday 17 March 2026 00:58:15 +0000 (0:00:11.948) 0:02:45.643 ********* 2026-03-17 00:58:50.584732 | orchestrator | changed: [testbed-node-2] 2026-03-17 00:58:50.584738 | orchestrator | changed: [testbed-node-1] 2026-03-17 00:58:50.584744 | orchestrator | 2026-03-17 00:58:50.584750 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-03-17 00:58:50.584756 | orchestrator | Tuesday 17 March 2026 00:58:28 +0000 (0:00:12.119) 0:02:57.762 ********* 2026-03-17 00:58:50.584768 | orchestrator | changed: 
[testbed-node-0] => (item=1) 2026-03-17 00:58:50.584774 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-03-17 00:58:50.584780 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-03-17 00:58:50.584783 | orchestrator | 2026-03-17 00:58:50.584787 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-17 00:58:50.584791 | orchestrator | Tuesday 17 March 2026 00:58:42 +0000 (0:00:14.964) 0:03:12.726 ********* 2026-03-17 00:58:50.584795 | orchestrator | skipping: [testbed-node-0] 2026-03-17 00:58:50.584799 | orchestrator | 2026-03-17 00:58:50.584802 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-17 00:58:50.584806 | orchestrator | Tuesday 17 March 2026 00:58:43 +0000 (0:00:00.107) 0:03:12.834 ********* 2026-03-17 00:58:50.584810 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.584814 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.584822 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.584827 | orchestrator | 2026-03-17 00:58:50.584832 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-17 00:58:50.584839 | orchestrator | Tuesday 17 March 2026 00:58:43 +0000 (0:00:00.829) 0:03:13.663 ********* 2026-03-17 00:58:50.584848 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.584854 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.584860 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.584866 | orchestrator | 2026-03-17 00:58:50.584871 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-17 00:58:50.584877 | orchestrator | Tuesday 17 March 2026 00:58:44 +0000 (0:00:00.522) 0:03:14.186 ********* 2026-03-17 00:58:50.584883 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.584888 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.584894 | orchestrator | ok: 
[testbed-node-2] 2026-03-17 00:58:50.584901 | orchestrator | 2026-03-17 00:58:50.584907 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-17 00:58:50.584919 | orchestrator | Tuesday 17 March 2026 00:58:45 +0000 (0:00:00.666) 0:03:14.853 ********* 2026-03-17 00:58:50.584968 | orchestrator | skipping: [testbed-node-1] 2026-03-17 00:58:50.584975 | orchestrator | skipping: [testbed-node-2] 2026-03-17 00:58:50.584981 | orchestrator | changed: [testbed-node-0] 2026-03-17 00:58:50.584987 | orchestrator | 2026-03-17 00:58:50.584994 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-17 00:58:50.584998 | orchestrator | Tuesday 17 March 2026 00:58:45 +0000 (0:00:00.526) 0:03:15.379 ********* 2026-03-17 00:58:50.585001 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.585005 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.585009 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.585013 | orchestrator | 2026-03-17 00:58:50.585017 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-17 00:58:50.585020 | orchestrator | Tuesday 17 March 2026 00:58:46 +0000 (0:00:01.177) 0:03:16.557 ********* 2026-03-17 00:58:50.585024 | orchestrator | ok: [testbed-node-0] 2026-03-17 00:58:50.585028 | orchestrator | ok: [testbed-node-1] 2026-03-17 00:58:50.585032 | orchestrator | ok: [testbed-node-2] 2026-03-17 00:58:50.585036 | orchestrator | 2026-03-17 00:58:50.585039 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-03-17 00:58:50.585043 | orchestrator | Tuesday 17 March 2026 00:58:47 +0000 (0:00:00.792) 0:03:17.349 ********* 2026-03-17 00:58:50.585047 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-03-17 00:58:50.585051 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-03-17 00:58:50.585055 | orchestrator | ok: [testbed-node-2] => (item=1) 
2026-03-17 00:58:50.585058 | orchestrator | 2026-03-17 00:58:50.585062 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 00:58:50.585067 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-17 00:58:50.585072 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-03-17 00:58:50.585082 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-03-17 00:58:50.585086 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:58:50.585089 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:58:50.585093 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-17 00:58:50.585097 | orchestrator | 2026-03-17 00:58:50.585101 | orchestrator | 2026-03-17 00:58:50.585105 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 00:58:50.585109 | orchestrator | Tuesday 17 March 2026 00:58:48 +0000 (0:00:01.148) 0:03:18.498 ********* 2026-03-17 00:58:50.585112 | orchestrator | =============================================================================== 2026-03-17 00:58:50.585116 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 26.89s 2026-03-17 00:58:50.585120 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 22.50s 2026-03-17 00:58:50.585124 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 20.16s 2026-03-17 00:58:50.585128 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.43s 2026-03-17 00:58:50.585131 | orchestrator | ovn-db : Restart ovn-northd 
container ---------------------------------- 14.44s 2026-03-17 00:58:50.585135 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 7.93s 2026-03-17 00:58:50.585139 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.42s 2026-03-17 00:58:50.585143 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.57s 2026-03-17 00:58:50.585147 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.30s 2026-03-17 00:58:50.585150 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.03s 2026-03-17 00:58:50.585154 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 3.59s 2026-03-17 00:58:50.585158 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.09s 2026-03-17 00:58:50.585162 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.94s 2026-03-17 00:58:50.585165 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.35s 2026-03-17 00:58:50.585174 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.20s 2026-03-17 00:58:50.585177 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 2.08s 2026-03-17 00:58:50.585181 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.86s 2026-03-17 00:58:50.585185 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.76s 2026-03-17 00:58:50.585191 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.71s 2026-03-17 00:58:50.585197 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.65s 2026-03-17 00:58:50.585202 | orchestrator | 2026-03-17 00:58:50 | INFO  | Task 
88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:58:50.585211 | orchestrator | 2026-03-17 00:58:50 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:53.614542 | orchestrator | 2026-03-17 00:58:53 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:58:53.615859 | orchestrator | 2026-03-17 00:58:53 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:58:53.615903 | orchestrator | 2026-03-17 00:58:53 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:56.667692 | orchestrator | 2026-03-17 00:58:56 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:58:56.668211 | orchestrator | 2026-03-17 00:58:56 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:58:56.668246 | orchestrator | 2026-03-17 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:58:59.698954 | orchestrator | 2026-03-17 00:58:59 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:58:59.700542 | orchestrator | 2026-03-17 00:58:59 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:58:59.700826 | orchestrator | 2026-03-17 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:02.726549 | orchestrator | 2026-03-17 00:59:02 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:59:02.727244 | orchestrator | 2026-03-17 00:59:02 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 00:59:02.727273 | orchestrator | 2026-03-17 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 00:59:05.762580 | orchestrator | 2026-03-17 00:59:05 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 00:59:05.764732 | orchestrator | 2026-03-17 00:59:05 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 
00:59:05.764828 | orchestrator | 2026-03-17 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:40.117371 | orchestrator | 2026-03-17 01:00:40 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 01:00:40.117858 | orchestrator | 2026-03-17 01:00:40 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state STARTED 2026-03-17 01:00:40.117908 | orchestrator | 2026-03-17 01:00:40 | INFO  | Wait 1 second(s)
until the next check 2026-03-17 01:00:43.159129 | orchestrator | 2026-03-17 01:00:43 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 01:00:43.165256 | orchestrator | 2026-03-17 01:00:43 | INFO  | Task 88ad0e7c-1b54-4f7f-b8a9-c5c023c7b4ca is in state SUCCESS 2026-03-17 01:00:43.168766 | orchestrator | 2026-03-17 01:00:43.168863 | orchestrator | 2026-03-17 01:00:43.168875 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:00:43.168882 | orchestrator | 2026-03-17 01:00:43.168889 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:00:43.168897 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.591) 0:00:00.591 ********* 2026-03-17 01:00:43.168903 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.168910 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.168917 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.168924 | orchestrator | 2026-03-17 01:00:43.168930 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:00:43.168937 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:00.403) 0:00:00.994 ********* 2026-03-17 01:00:43.168944 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-17 01:00:43.168951 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-17 01:00:43.169005 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-17 01:00:43.169012 | orchestrator | 2026-03-17 01:00:43.169018 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-17 01:00:43.170176 | orchestrator | 2026-03-17 01:00:43.170218 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-17 01:00:43.170225 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 
(0:00:00.541) 0:00:01.536 ********* 2026-03-17 01:00:43.170230 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.170234 | orchestrator | 2026-03-17 01:00:43.170238 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-17 01:00:43.170242 | orchestrator | Tuesday 17 March 2026 00:54:25 +0000 (0:00:01.053) 0:00:02.589 ********* 2026-03-17 01:00:43.170246 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.170251 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.170255 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.170259 | orchestrator | 2026-03-17 01:00:43.170263 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-17 01:00:43.170266 | orchestrator | Tuesday 17 March 2026 00:54:27 +0000 (0:00:01.123) 0:00:03.713 ********* 2026-03-17 01:00:43.170271 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.170274 | orchestrator | 2026-03-17 01:00:43.170278 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-17 01:00:43.170282 | orchestrator | Tuesday 17 March 2026 00:54:27 +0000 (0:00:00.682) 0:00:04.396 ********* 2026-03-17 01:00:43.170286 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.170290 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.170294 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.170297 | orchestrator | 2026-03-17 01:00:43.170301 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-17 01:00:43.170305 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:01.001) 0:00:05.397 ********* 2026-03-17 01:00:43.170309 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-17 01:00:43.170313 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-17 01:00:43.170317 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-17 01:00:43.170321 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-17 01:00:43.170324 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-17 01:00:43.170328 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-17 01:00:43.170332 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-17 01:00:43.170337 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-17 01:00:43.170340 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-17 01:00:43.170344 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-17 01:00:43.170348 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-17 01:00:43.170352 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-17 01:00:43.170355 | orchestrator | 2026-03-17 01:00:43.170359 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-17 01:00:43.170363 | orchestrator | Tuesday 17 March 2026 00:54:32 +0000 (0:00:04.190) 0:00:09.587 ********* 2026-03-17 01:00:43.170367 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-17 01:00:43.170383 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-17 01:00:43.170387 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-17 01:00:43.170391 | orchestrator | 
2026-03-17 01:00:43.170395 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-17 01:00:43.170410 | orchestrator | Tuesday 17 March 2026 00:54:34 +0000 (0:00:01.246) 0:00:10.834 ********* 2026-03-17 01:00:43.170414 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-17 01:00:43.170427 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-17 01:00:43.170430 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-17 01:00:43.170441 | orchestrator | 2026-03-17 01:00:43.170445 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-17 01:00:43.170448 | orchestrator | Tuesday 17 March 2026 00:54:35 +0000 (0:00:01.707) 0:00:12.541 ********* 2026-03-17 01:00:43.170452 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-17 01:00:43.170456 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.170473 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-17 01:00:43.170477 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.170481 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-17 01:00:43.170485 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.170494 | orchestrator | 2026-03-17 01:00:43.170498 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-17 01:00:43.170502 | orchestrator | Tuesday 17 March 2026 00:54:36 +0000 (0:00:00.908) 0:00:13.450 ********* 2026-03-17 01:00:43.170508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.170516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.170520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.170524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.170532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.170585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.170592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.170596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.170600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.170604 | orchestrator | 2026-03-17 01:00:43.170608 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-17 01:00:43.170612 | orchestrator | Tuesday 17 March 2026 00:54:39 +0000 (0:00:02.649) 0:00:16.100 ********* 2026-03-17 01:00:43.170616 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.170620 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.170624 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.170627 | orchestrator | 2026-03-17 01:00:43.170631 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-17 01:00:43.170635 | orchestrator | Tuesday 17 March 2026 00:54:40 +0000 (0:00:01.449) 0:00:17.550 
********* 2026-03-17 01:00:43.170639 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-17 01:00:43.170643 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-17 01:00:43.170646 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-17 01:00:43.170650 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-17 01:00:43.170654 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-17 01:00:43.170658 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-17 01:00:43.170662 | orchestrator | 2026-03-17 01:00:43.170665 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-17 01:00:43.170673 | orchestrator | Tuesday 17 March 2026 00:54:43 +0000 (0:00:02.708) 0:00:20.258 ********* 2026-03-17 01:00:43.170677 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.170681 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.170684 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.170688 | orchestrator | 2026-03-17 01:00:43.170692 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-17 01:00:43.170696 | orchestrator | Tuesday 17 March 2026 00:54:44 +0000 (0:00:01.088) 0:00:21.347 ********* 2026-03-17 01:00:43.170700 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.170704 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.170707 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.170712 | orchestrator | 2026-03-17 01:00:43.170715 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-17 01:00:43.170770 | orchestrator | Tuesday 17 March 2026 00:54:46 +0000 (0:00:01.982) 0:00:23.329 ********* 2026-03-17 01:00:43.170777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.170794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.170799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.170804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04', '__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 01:00:43.170809 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.170813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.170841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.170846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.170854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04', '__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 01:00:43.170858 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.171134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.171158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.171164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.171180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04', '__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 01:00:43.171189 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.171196 | orchestrator | 2026-03-17 01:00:43.171201 | orchestrator | 
TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-17 01:00:43.171207 | orchestrator | Tuesday 17 March 2026 00:54:48 +0000 (0:00:01.814) 0:00:25.144 ********* 2026-03-17 01:00:43.171214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.171343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04', 
'__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 01:00:43.171350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.171366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04', 
'__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 01:00:43.171389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.171401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04', 
'__omit_place_holder__32e1e3809b17f711667a819822859a63aa71dd04'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-17 01:00:43.171427 | orchestrator | 2026-03-17 01:00:43.171435 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-17 01:00:43.171441 | orchestrator | Tuesday 17 March 2026 00:54:53 +0000 (0:00:04.636) 0:00:29.780 ********* 2026-03-17 01:00:43.171447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.171542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.171549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.171555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.171561 | orchestrator | 2026-03-17 01:00:43.171567 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-17 01:00:43.171574 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:03.710) 0:00:33.490 ********* 2026-03-17 01:00:43.171581 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-17 01:00:43.171592 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-17 01:00:43.171598 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-17 01:00:43.171605 | orchestrator | 2026-03-17 01:00:43.171612 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-17 01:00:43.171618 | orchestrator | Tuesday 17 March 2026 00:54:59 +0000 (0:00:02.482) 0:00:35.972 ********* 2026-03-17 01:00:43.171625 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-17 01:00:43.171632 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-17 01:00:43.171638 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-17 01:00:43.171644 | orchestrator | 2026-03-17 01:00:43.171671 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-17 01:00:43.171677 | orchestrator | Tuesday 17 March 2026 00:55:03 +0000 (0:00:04.357) 0:00:40.330 ********* 2026-03-17 01:00:43.171683 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.171689 
| orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.171700 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.171706 | orchestrator | 2026-03-17 01:00:43.171712 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-17 01:00:43.171718 | orchestrator | Tuesday 17 March 2026 00:55:04 +0000 (0:00:00.953) 0:00:41.283 ********* 2026-03-17 01:00:43.171723 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-17 01:00:43.171730 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-17 01:00:43.171737 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-17 01:00:43.171743 | orchestrator | 2026-03-17 01:00:43.171748 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-17 01:00:43.171754 | orchestrator | Tuesday 17 March 2026 00:55:06 +0000 (0:00:02.409) 0:00:43.693 ********* 2026-03-17 01:00:43.171761 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-17 01:00:43.171768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-17 01:00:43.171775 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-17 01:00:43.171804 | orchestrator | 2026-03-17 01:00:43.171810 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-17 01:00:43.171815 | orchestrator | Tuesday 17 March 2026 00:55:08 +0000 (0:00:01.993) 0:00:45.686 ********* 2026-03-17 01:00:43.171820 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.171923 | orchestrator | 2026-03-17 01:00:43.171928 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-17 01:00:43.171932 | orchestrator | Tuesday 17 March 2026 00:55:09 +0000 (0:00:00.529) 0:00:46.216 ********* 2026-03-17 01:00:43.171937 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-17 01:00:43.171941 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-17 01:00:43.171946 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-17 01:00:43.171951 | orchestrator | 2026-03-17 01:00:43.171955 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-17 01:00:43.171960 | orchestrator | Tuesday 17 March 2026 00:55:11 +0000 (0:00:02.430) 0:00:48.647 ********* 2026-03-17 01:00:43.171964 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-17 01:00:43.172506 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-17 01:00:43.172546 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-17 01:00:43.172554 | orchestrator | 2026-03-17 01:00:43.172560 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-03-17 01:00:43.172566 | orchestrator | Tuesday 17 March 2026 00:55:14 +0000 (0:00:02.127) 0:00:50.775 ********* 2026-03-17 01:00:43.172572 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.172577 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.172583 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.172589 | orchestrator | 2026-03-17 01:00:43.172595 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-03-17 01:00:43.172601 | orchestrator | Tuesday 17 March 2026 00:55:14 +0000 
(0:00:00.310) 0:00:51.085 ********* 2026-03-17 01:00:43.172608 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.172613 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.172620 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.172626 | orchestrator | 2026-03-17 01:00:43.172633 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-17 01:00:43.172639 | orchestrator | Tuesday 17 March 2026 00:55:14 +0000 (0:00:00.287) 0:00:51.373 ********* 2026-03-17 01:00:43.172662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.173640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.173665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.173672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.173680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.173687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.173694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.173717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.173783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.173793 | orchestrator | 2026-03-17 01:00:43.173799 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-17 01:00:43.173806 | orchestrator | Tuesday 17 March 2026 00:55:18 +0000 (0:00:03.773) 0:00:55.146 ********* 2026-03-17 01:00:43.173812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.173819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.173840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.173846 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.173853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.173890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.173897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.173904 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.173942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.173953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.173959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.173965 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.173972 | orchestrator | 2026-03-17 01:00:43.173978 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-17 01:00:43.173983 | orchestrator | Tuesday 17 March 2026 00:55:19 +0000 (0:00:01.309) 0:00:56.456 ********* 2026-03-17 01:00:43.173989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.174000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-17 01:00:43.174010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.174057 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.174106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.174115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.174122 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.174128 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.174134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.174146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.174154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.174160 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.174166 | orchestrator | 2026-03-17 01:00:43.174173 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-17 01:00:43.174179 | orchestrator | Tuesday 17 March 2026 00:55:21 +0000 (0:00:01.332) 0:00:57.788 ********* 2026-03-17 01:00:43.174188 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-17 01:00:43.174195 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-17 01:00:43.174201 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-17 01:00:43.174207 | orchestrator | 2026-03-17 01:00:43.174213 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-17 01:00:43.174219 | orchestrator | Tuesday 17 March 2026 00:55:22 +0000 (0:00:01.609) 0:00:59.397 ********* 2026-03-17 01:00:43.174225 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-17 01:00:43.174333 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-17 01:00:43.174343 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-17 01:00:43.174349 | 
orchestrator | 2026-03-17 01:00:43.174355 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-17 01:00:43.174412 | orchestrator | Tuesday 17 March 2026 00:55:23 +0000 (0:00:01.238) 0:01:00.636 ********* 2026-03-17 01:00:43.174419 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-17 01:00:43.174425 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-17 01:00:43.174448 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-17 01:00:43.174455 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-17 01:00:43.174461 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.174467 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-17 01:00:43.174472 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.174478 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-17 01:00:43.174483 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.174489 | orchestrator | 2026-03-17 01:00:43.174495 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-17 01:00:43.174501 | orchestrator | Tuesday 17 March 2026 00:55:24 +0000 (0:00:00.783) 0:01:01.420 ********* 2026-03-17 01:00:43.174516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.174789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.174802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.174809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.174918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.174928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.174935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.174954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.174960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.174967 | orchestrator | 2026-03-17 01:00:43.174973 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-17 01:00:43.174979 | orchestrator | Tuesday 17 March 2026 00:55:26 +0000 (0:00:02.167) 0:01:03.587 ********* 2026-03-17 01:00:43.174985 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:00:43.174991 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:00:43.174997 | orchestrator | } 2026-03-17 01:00:43.175002 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:00:43.175008 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:00:43.175014 | orchestrator | } 2026-03-17 01:00:43.175019 | orchestrator | changed: 
[testbed-node-2] => { 2026-03-17 01:00:43.175038 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:00:43.175044 | orchestrator | } 2026-03-17 01:00:43.175051 | orchestrator | 2026-03-17 01:00:43.175057 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:00:43.175063 | orchestrator | Tuesday 17 March 2026 00:55:27 +0000 (0:00:00.389) 0:01:03.976 ********* 2026-03-17 01:00:43.175072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.175124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.175131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.175145 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.175151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.175157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.175163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.175169 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.175175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.175184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.175378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.175402 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.175408 | orchestrator | 2026-03-17 01:00:43.175415 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-17 01:00:43.175421 | orchestrator | Tuesday 17 March 2026 00:55:28 +0000 (0:00:01.042) 0:01:05.019 ********* 2026-03-17 01:00:43.175427 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.175433 | orchestrator | 2026-03-17 01:00:43.175439 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-17 01:00:43.175445 | orchestrator | Tuesday 17 March 2026 00:55:29 +0000 (0:00:00.717) 0:01:05.736 ********* 2026-03-17 01:00:43.175453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-03-17 01:00:43.175462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 01:00:43.175469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.175481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 01:00:43.175563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.175592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 01:00:43.175643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175663 | orchestrator | 2026-03-17 01:00:43.175669 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-17 01:00:43.175675 | orchestrator | Tuesday 17 March 2026 00:55:33 +0000 (0:00:04.209) 0:01:09.946 ********* 2026-03-17 01:00:43.175682 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.175688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 01:00:43.175694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175716 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.175768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.175777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 01:00:43.175783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175797 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.175803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 
'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.175818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-17 01:00:43.175878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.175892 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.175898 | orchestrator | 2026-03-17 01:00:43.175903 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-17 01:00:43.175943 | orchestrator | Tuesday 17 March 2026 00:55:33 +0000 (0:00:00.724) 0:01:10.670 ********* 2026-03-17 01:00:43.175951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.175959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.175966 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.175972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.176262 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.176270 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.176277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.176283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.176290 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.176295 | orchestrator | 2026-03-17 01:00:43.176301 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-17 01:00:43.176316 | orchestrator | Tuesday 17 March 2026 00:55:35 +0000 (0:00:01.232) 0:01:11.903 ********* 2026-03-17 01:00:43.176323 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.176329 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.176335 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.176341 | orchestrator | 2026-03-17 01:00:43.176347 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-17 01:00:43.176353 | orchestrator | Tuesday 17 March 2026 00:55:36 +0000 (0:00:01.099) 0:01:13.003 ********* 2026-03-17 01:00:43.176360 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.176366 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.176372 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.176378 | orchestrator | 2026-03-17 01:00:43.176383 | 
orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-17 01:00:43.176395 | orchestrator | Tuesday 17 March 2026 00:55:38 +0000 (0:00:02.204) 0:01:15.207 ********* 2026-03-17 01:00:43.176401 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.176407 | orchestrator | 2026-03-17 01:00:43.176412 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-17 01:00:43.176418 | orchestrator | Tuesday 17 March 2026 00:55:39 +0000 (0:00:00.648) 0:01:15.855 ********* 2026-03-17 01:00:43.176494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.176505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.176512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.176519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.176538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.176573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.176581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.176588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.176595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.176606 | orchestrator | 2026-03-17 01:00:43.176612 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external 
frontend] *** 2026-03-17 01:00:43.176619 | orchestrator | Tuesday 17 March 2026 00:55:44 +0000 (0:00:05.088) 0:01:20.944 ********* 2026-03-17 01:00:43.176629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.177242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.177914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.177922 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.177931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.177947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.177953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.177959 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.178057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.178068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.178072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.178076 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.178080 | orchestrator | 2026-03-17 01:00:43.178084 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-17 01:00:43.178537 | orchestrator | Tuesday 17 March 2026 00:55:44 +0000 (0:00:00.536) 0:01:21.480 ********* 2026-03-17 01:00:43.178558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.178564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.178569 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.178573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.178577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.178581 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.178585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.178589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.178593 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.178597 | orchestrator | 2026-03-17 01:00:43.178600 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL 
users config] *********** 2026-03-17 01:00:43.178607 | orchestrator | Tuesday 17 March 2026 00:55:45 +0000 (0:00:00.669) 0:01:22.150 ********* 2026-03-17 01:00:43.178611 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.178615 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.178619 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.178623 | orchestrator | 2026-03-17 01:00:43.178626 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-17 01:00:43.178630 | orchestrator | Tuesday 17 March 2026 00:55:47 +0000 (0:00:01.598) 0:01:23.749 ********* 2026-03-17 01:00:43.178634 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.178638 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.178641 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.178645 | orchestrator | 2026-03-17 01:00:43.178649 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-17 01:00:43.178653 | orchestrator | Tuesday 17 March 2026 00:55:49 +0000 (0:00:02.236) 0:01:25.985 ********* 2026-03-17 01:00:43.178656 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.178660 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.178664 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.178668 | orchestrator | 2026-03-17 01:00:43.179190 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-17 01:00:43.179214 | orchestrator | Tuesday 17 March 2026 00:55:49 +0000 (0:00:00.296) 0:01:26.282 ********* 2026-03-17 01:00:43.179218 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.179222 | orchestrator | 2026-03-17 01:00:43.179226 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-17 01:00:43.179230 | orchestrator | Tuesday 17 March 2026 00:55:50 +0000 
(0:00:00.672) 0:01:26.954 ********* 2026-03-17 01:00:43.179235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-17 01:00:43.179248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-17 01:00:43.179253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-17 01:00:43.179257 | orchestrator | 2026-03-17 01:00:43.179261 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-17 01:00:43.179266 | orchestrator | Tuesday 17 March 2026 00:55:53 +0000 (0:00:02.839) 0:01:29.794 ********* 2026-03-17 01:00:43.179272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-17 01:00:43.179276 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.179286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-17 01:00:43.179293 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.179297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-17 01:00:43.179301 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.179305 | orchestrator | 2026-03-17 01:00:43.179309 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-17 01:00:43.179312 | orchestrator | Tuesday 17 March 2026 
00:55:54 +0000 (0:00:01.320) 0:01:31.114 ********* 2026-03-17 01:00:43.179318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 01:00:43.179323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 01:00:43.179328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 01:00:43.179333 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.179339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 01:00:43.179343 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 01:00:43.179346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 01:00:43.179357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-17 01:00:43.179362 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.179365 | orchestrator | 2026-03-17 01:00:43.179369 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-17 01:00:43.179373 | orchestrator | Tuesday 17 March 2026 00:55:56 +0000 (0:00:01.921) 0:01:33.035 ********* 2026-03-17 01:00:43.179377 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.179381 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.179385 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.179390 | orchestrator | 2026-03-17 01:00:43.179396 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-17 01:00:43.179402 | orchestrator | Tuesday 17 March 2026 00:55:56 +0000 (0:00:00.417) 0:01:33.453 ********* 2026-03-17 01:00:43.179408 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.179413 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.179420 | orchestrator | skipping: [testbed-node-2] 
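The ceph-rgw entries above carry a `custom_member_list` whose `server` lines all follow one fixed pattern. As a side note on how such lines are formed, here is a minimal Python sketch that renders them from a node/IP map; the node names, IPs, port and check parameters are taken from the log, while the helper function itself is purely illustrative, not part of kolla-ansible:

```python
def render_member_lines(members, port=7480, inter=2000, rise=2, fall=5):
    """Render haproxy backend 'server' lines in the custom_member_list style.

    Each line gets a health check ('check') probed every `inter` ms,
    marked up after `rise` successes and down after `fall` failures.
    """
    return [
        f"server {name} {ip}:{port} check inter {inter} rise {rise} fall {fall}"
        for name, ip in members
    ]

# Values as they appear in the radosgw backend above.
members = [
    ("testbed-node-3", "192.168.16.13"),
    ("testbed-node-4", "192.168.16.14"),
    ("testbed-node-5", "192.168.16.15"),
]
lines = render_member_lines(members)
# lines[0] == "server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5"
```

Note that the radosgw backends point at port 7480 on the storage nodes while haproxy itself listens on 6780, which is why the member list is spelled out explicitly instead of being derived from the listen port.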
2026-03-17 01:00:43.179426 | orchestrator | 2026-03-17 01:00:43.179432 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-17 01:00:43.179438 | orchestrator | Tuesday 17 March 2026 00:55:58 +0000 (0:00:01.248) 0:01:34.701 ********* 2026-03-17 01:00:43.179443 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.179449 | orchestrator | 2026-03-17 01:00:43.179456 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-17 01:00:43.179462 | orchestrator | Tuesday 17 March 2026 00:55:58 +0000 (0:00:00.950) 0:01:35.652 ********* 2026-03-17 01:00:43.179468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.179476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179510 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.179516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.179531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179561 | orchestrator | 2026-03-17 01:00:43.179565 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-17 01:00:43.179569 | orchestrator | Tuesday 17 March 2026 00:56:03 +0000 (0:00:04.202) 0:01:39.855 ********* 2026-03-17 01:00:43.179575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.179584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.179607 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.179614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:00:43.179621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179653 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.179660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:00:43.179667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179694 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.179701 | orchestrator |
2026-03-17 01:00:43.179707 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-17 01:00:43.179711 | orchestrator | Tuesday 17 March 2026 00:56:03 +0000 (0:00:00.685) 0:01:40.541 *********
2026-03-17 01:00:43.179715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-17 01:00:43.179723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-17 01:00:43.179727 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.179731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-17 01:00:43.179735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-17 01:00:43.179739 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.179743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-17 01:00:43.179747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-17 01:00:43.179751 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.179754 | orchestrator |
2026-03-17 01:00:43.179758 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-03-17 01:00:43.179762 | orchestrator | Tuesday 17 March 2026 00:56:04 +0000 (0:00:00.939) 0:01:41.480 *********
2026-03-17 01:00:43.179766 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:00:43.179770 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:00:43.179773 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:00:43.179777 | orchestrator |
2026-03-17 01:00:43.179781 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-03-17 01:00:43.179785 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:01.080) 0:01:42.561 *********
2026-03-17 01:00:43.179788 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:00:43.179792 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:00:43.179796 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:00:43.179803 | orchestrator |
2026-03-17 01:00:43.179807 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-03-17 01:00:43.179812 | orchestrator | Tuesday 17 March 2026 00:56:08 +0000 (0:00:02.588) 0:01:45.150 *********
2026-03-17 01:00:43.179816 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.179838 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.179843 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.179848 | orchestrator |
2026-03-17 01:00:43.179852 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-17 01:00:43.179857 | orchestrator | Tuesday 17 March 2026 00:56:08 +0000 (0:00:00.384) 0:01:45.534 *********
2026-03-17 01:00:43.179861 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.179865 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.179870 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.179874 | orchestrator |
2026-03-17 01:00:43.179878 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-17 01:00:43.179883 | orchestrator | Tuesday 17 March 2026 00:56:09 +0000 (0:00:00.549) 0:01:46.083 *********
2026-03-17 01:00:43.179887 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:00:43.179892 | orchestrator |
2026-03-17 01:00:43.179896 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-17 01:00:43.179900 | orchestrator | Tuesday 17 March 2026 00:56:10 +0000 (0:00:00.834) 0:01:46.918 *********
2026-03-17 01:00:43.179907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:00:43.179915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:00:43.179920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:00:43.179934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:00:43.179944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:00:43.179963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:00:43.179976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.179999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180022 | orchestrator |
2026-03-17 01:00:43.180038 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-17 01:00:43.180043 | orchestrator | Tuesday 17 March 2026 00:56:14 +0000 (0:00:03.784) 0:01:50.702 *********
2026-03-17 01:00:43.180050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:00:43.180054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:00:43.180058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180086 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.180090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:00:43.180094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:00:43.180098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180125 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.180129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:00:43.180133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:00:43.180139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.180166 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.180170 | orchestrator |
2026-03-17 01:00:43.180174 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-17 01:00:43.180178 | orchestrator | Tuesday 17 March 2026 00:56:15 +0000 (0:00:01.952) 0:01:52.654 *********
2026-03-17 01:00:43.180182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False,
'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.180187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.180192 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.180196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.180205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.180209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.180218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.180222 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.180226 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.180230 | orchestrator | 2026-03-17 01:00:43.180236 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-17 01:00:43.180240 | orchestrator | Tuesday 17 March 2026 00:56:17 +0000 (0:00:01.475) 0:01:54.130 ********* 
2026-03-17 01:00:43.180244 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.180248 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.180251 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.180255 | orchestrator | 2026-03-17 01:00:43.180259 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-17 01:00:43.180263 | orchestrator | Tuesday 17 March 2026 00:56:18 +0000 (0:00:01.475) 0:01:55.605 ********* 2026-03-17 01:00:43.180266 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.180270 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.180274 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.180278 | orchestrator | 2026-03-17 01:00:43.180281 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-17 01:00:43.180285 | orchestrator | Tuesday 17 March 2026 00:56:21 +0000 (0:00:02.264) 0:01:57.869 ********* 2026-03-17 01:00:43.180289 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.180292 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.180296 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.180300 | orchestrator | 2026-03-17 01:00:43.180304 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-17 01:00:43.180307 | orchestrator | Tuesday 17 March 2026 00:56:21 +0000 (0:00:00.259) 0:01:58.129 ********* 2026-03-17 01:00:43.180311 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.180315 | orchestrator | 2026-03-17 01:00:43.180319 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-17 01:00:43.180322 | orchestrator | Tuesday 17 March 2026 00:56:22 +0000 (0:00:00.804) 0:01:58.933 ********* 2026-03-17 01:00:43.180327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:00:43.180340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.180345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:00:43.181025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.181055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:00:43.181061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.181068 | orchestrator | 2026-03-17 01:00:43.181095 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-17 01:00:43.181100 | orchestrator | Tuesday 17 March 2026 00:56:27 +0000 (0:00:05.405) 0:02:04.338 ********* 2026-03-17 01:00:43.181104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:00:43.181111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.181118 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.181142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:00:43.181149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.181160 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.181211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:00:43.181222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 
ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.181233 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.181238 | orchestrator | 2026-03-17 01:00:43.181244 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-17 01:00:43.181250 | orchestrator | Tuesday 17 March 2026 00:56:32 +0000 (0:00:05.085) 0:02:09.424 ********* 2026-03-17 01:00:43.181258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 01:00:43.181303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 01:00:43.181313 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.181320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 01:00:43.181326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 01:00:43.181332 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.181338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 01:00:43.181344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-17 01:00:43.181355 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.181360 | orchestrator | 2026-03-17 01:00:43.181366 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-17 01:00:43.181373 | orchestrator | Tuesday 17 March 2026 00:56:36 +0000 (0:00:03.867) 0:02:13.292 ********* 2026-03-17 01:00:43.181379 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.181385 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.181391 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.181396 | orchestrator | 2026-03-17 01:00:43.181400 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-17 01:00:43.181404 | orchestrator | Tuesday 17 March 2026 00:56:37 +0000 (0:00:01.168) 0:02:14.460 ********* 2026-03-17 01:00:43.181407 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.181411 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.181415 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.181419 | orchestrator | 2026-03-17 01:00:43.181422 | orchestrator | TASK [include_role : gnocchi] 
************************************************** 2026-03-17 01:00:43.181426 | orchestrator | Tuesday 17 March 2026 00:56:39 +0000 (0:00:01.861) 0:02:16.321 ********* 2026-03-17 01:00:43.181430 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.181434 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.181437 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.181441 | orchestrator | 2026-03-17 01:00:43.181445 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-17 01:00:43.181451 | orchestrator | Tuesday 17 March 2026 00:56:39 +0000 (0:00:00.255) 0:02:16.577 ********* 2026-03-17 01:00:43.181455 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.181459 | orchestrator | 2026-03-17 01:00:43.181463 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-17 01:00:43.181467 | orchestrator | Tuesday 17 March 2026 00:56:40 +0000 (0:00:01.041) 0:02:17.619 ********* 2026-03-17 01:00:43.181505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.181511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.181516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.181523 | orchestrator | 2026-03-17 01:00:43.181527 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-17 01:00:43.181531 | orchestrator | Tuesday 17 March 2026 00:56:44 +0000 (0:00:03.166) 0:02:20.786 ********* 2026-03-17 01:00:43.181535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.181539 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.181544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.181548 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.181584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.181592 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.181598 | orchestrator | 2026-03-17 01:00:43.181604 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-17 01:00:43.181610 | orchestrator | Tuesday 17 March 2026 00:56:44 +0000 (0:00:00.330) 0:02:21.116 ********* 2026-03-17 01:00:43.181617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.181622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.181633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.181637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.181641 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.181644 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.181648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.181652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.181656 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.181660 | orchestrator | 2026-03-17 01:00:43.181664 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-17 01:00:43.181668 | orchestrator | Tuesday 17 March 2026 00:56:45 +0000 (0:00:00.732) 0:02:21.849 ********* 2026-03-17 01:00:43.181672 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.181675 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.181679 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.181683 | orchestrator | 2026-03-17 01:00:43.181687 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-17 01:00:43.181691 | orchestrator | Tuesday 17 March 2026 00:56:46 +0000 (0:00:01.385) 0:02:23.234 ********* 2026-03-17 01:00:43.181694 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.181698 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.181702 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.181706 | orchestrator | 2026-03-17 01:00:43.181710 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-17 01:00:43.181713 | orchestrator | Tuesday 17 March 2026 00:56:48 +0000 (0:00:02.052) 0:02:25.286 ********* 2026-03-17 01:00:43.181717 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.181721 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.181725 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:00:43.181729 | orchestrator | 2026-03-17 01:00:43.181732 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-17 01:00:43.181736 | orchestrator | Tuesday 17 March 2026 00:56:48 +0000 (0:00:00.323) 0:02:25.609 ********* 2026-03-17 01:00:43.181740 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.181744 | orchestrator | 2026-03-17 01:00:43.181748 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-17 01:00:43.181751 | orchestrator | Tuesday 17 March 2026 00:56:49 +0000 (0:00:00.992) 0:02:26.602 ********* 2026-03-17 01:00:43.181789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option 
httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:00:43.181802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:00:43.181859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:00:43.181869 | orchestrator | 2026-03-17 01:00:43.181873 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-17 01:00:43.181877 | orchestrator | Tuesday 17 March 2026 00:56:52 +0000 (0:00:03.081) 0:02:29.684 ********* 2026-03-17 01:00:43.181913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})  2026-03-17 01:00:43.181928 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.181937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:00:43.181944 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.181993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:00:43.182006 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.182012 | orchestrator | 2026-03-17 01:00:43.182038 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-17 01:00:43.182044 | orchestrator | Tuesday 17 March 2026 00:56:53 +0000 (0:00:00.837) 0:02:30.521 ********* 2026-03-17 01:00:43.182051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-17 01:00:43.182059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 01:00:43.182067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-17 01:00:43.182073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 01:00:43.182079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-17 01:00:43.182086 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.182093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-17 01:00:43.182100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 01:00:43.182106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-17 01:00:43.182118 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 01:00:43.182124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-17 01:00:43.182130 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.182177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-17 01:00:43.182185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 01:00:43.182191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-17 01:00:43.182198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-17 01:00:43.182205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-17 01:00:43.182210 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.182216 | orchestrator | 2026-03-17 01:00:43.182222 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-17 01:00:43.182228 | orchestrator | Tuesday 17 March 2026 00:56:55 +0000 (0:00:01.775) 0:02:32.297 ********* 2026-03-17 01:00:43.182234 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.182240 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.182246 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.182252 | orchestrator | 2026-03-17 01:00:43.182258 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-17 01:00:43.182264 | orchestrator | Tuesday 17 March 2026 00:56:56 +0000 (0:00:01.310) 0:02:33.608 ********* 2026-03-17 01:00:43.182270 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.182276 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.182282 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.182288 | orchestrator | 2026-03-17 01:00:43.182294 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-17 01:00:43.182299 | orchestrator | Tuesday 17 March 2026 00:56:58 +0000 (0:00:02.042) 0:02:35.650 ********* 2026-03-17 01:00:43.182306 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.182311 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.182317 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.182323 | orchestrator | 2026-03-17 01:00:43.182328 | orchestrator | TASK [include_role : ironic] 
*************************************************** 2026-03-17 01:00:43.182334 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.340) 0:02:35.990 ********* 2026-03-17 01:00:43.182340 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.182350 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.182356 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.182362 | orchestrator | 2026-03-17 01:00:43.182367 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-17 01:00:43.182373 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.305) 0:02:36.296 ********* 2026-03-17 01:00:43.182378 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.182384 | orchestrator | 2026-03-17 01:00:43.182390 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-17 01:00:43.182396 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:01.100) 0:02:37.396 ********* 2026-03-17 01:00:43.182406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:00:43.182467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:00:43.182477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:00:43.182482 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:00:43.182493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:00:43.182497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:00:43.182525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:00:43.182531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:00:43.182535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:00:43.182539 | orchestrator | 2026-03-17 01:00:43.182543 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-17 01:00:43.182547 | orchestrator | Tuesday 17 March 2026 00:57:05 +0000 (0:00:05.219) 0:02:42.615 ********* 2026-03-17 01:00:43.182551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:00:43.182563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:00:43.182600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:00:43.182606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:00:43.182610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:00:43.182614 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.182621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:00:43.182625 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.182629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:00:43.182636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:00:43.182667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:00:43.182673 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.182677 | 
orchestrator | 2026-03-17 01:00:43.182680 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-17 01:00:43.182684 | orchestrator | Tuesday 17 March 2026 00:57:06 +0000 (0:00:00.791) 0:02:43.406 ********* 2026-03-17 01:00:43.182688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-17 01:00:43.182693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-17 01:00:43.182698 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.182704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-17 01:00:43.182708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-17 01:00:43.182712 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.182716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-17 01:00:43.182720 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-17 01:00:43.182724 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.182728 | orchestrator | 2026-03-17 01:00:43.182732 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-17 01:00:43.182735 | orchestrator | Tuesday 17 March 2026 00:57:07 +0000 (0:00:01.254) 0:02:44.661 ********* 2026-03-17 01:00:43.182739 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.182743 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.182747 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.182751 | orchestrator | 2026-03-17 01:00:43.182754 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-17 01:00:43.182758 | orchestrator | Tuesday 17 March 2026 00:57:09 +0000 (0:00:01.293) 0:02:45.955 ********* 2026-03-17 01:00:43.182762 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.182766 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.182769 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.182773 | orchestrator | 2026-03-17 01:00:43.182777 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-17 01:00:43.182781 | orchestrator | Tuesday 17 March 2026 00:57:11 +0000 (0:00:01.976) 0:02:47.931 ********* 2026-03-17 01:00:43.182785 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.182788 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.182792 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.182796 | orchestrator | 2026-03-17 01:00:43.182800 | orchestrator | TASK [include_role : magnum] 
*************************************************** 2026-03-17 01:00:43.182806 | orchestrator | Tuesday 17 March 2026 00:57:11 +0000 (0:00:00.298) 0:02:48.230 ********* 2026-03-17 01:00:43.182810 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.182814 | orchestrator | 2026-03-17 01:00:43.182818 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-17 01:00:43.182836 | orchestrator | Tuesday 17 March 2026 00:57:12 +0000 (0:00:01.094) 0:02:49.325 ********* 2026-03-17 01:00:43.182877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.182890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.182898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.182905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.182915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.182957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.182968 | orchestrator | 2026-03-17 01:00:43.182972 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-17 01:00:43.182976 | orchestrator | Tuesday 17 March 2026 00:57:16 +0000 (0:00:04.270) 0:02:53.595 ********* 2026-03-17 01:00:43.182980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.182984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.182989 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.182995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.183027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183037 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.183041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.183045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 
01:00:43.183049 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.183053 | orchestrator | 2026-03-17 01:00:43.183057 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-17 01:00:43.183061 | orchestrator | Tuesday 17 March 2026 00:57:17 +0000 (0:00:00.510) 0:02:54.106 ********* 2026-03-17 01:00:43.183065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183074 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.183078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183086 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.183090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183115 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.183121 | orchestrator | 2026-03-17 01:00:43.183127 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-17 01:00:43.183134 | orchestrator | Tuesday 17 March 2026 00:57:18 +0000 (0:00:00.969) 0:02:55.075 ********* 2026-03-17 01:00:43.183178 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.183186 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.183192 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.183199 | orchestrator | 2026-03-17 01:00:43.183205 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-17 01:00:43.183212 | orchestrator | Tuesday 17 March 2026 00:57:19 +0000 (0:00:01.018) 0:02:56.094 ********* 2026-03-17 01:00:43.183218 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.183224 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.183230 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.183236 | orchestrator | 2026-03-17 01:00:43.183243 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-17 01:00:43.183247 | orchestrator | Tuesday 17 March 2026 00:57:21 +0000 (0:00:01.690) 0:02:57.785 ********* 2026-03-17 01:00:43.183251 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.183255 | orchestrator | 2026-03-17 01:00:43.183259 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-17 01:00:43.183263 | orchestrator | Tuesday 17 March 2026 00:57:22 +0000 (0:00:01.109) 0:02:58.895 ********* 2026-03-17 01:00:43.183267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.183272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.183334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 
'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.183339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183391 | orchestrator | 2026-03-17 01:00:43.183395 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-17 01:00:43.183399 | orchestrator | Tuesday 17 March 2026 00:57:27 +0000 (0:00:04.920) 0:03:03.815 ********* 2026-03-17 01:00:43.183403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.183407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183424 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.183457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.183463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183478 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.183486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.183548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.183570 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.183577 | orchestrator | 2026-03-17 01:00:43.183583 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-17 01:00:43.183589 | orchestrator | Tuesday 17 March 2026 00:57:28 +0000 (0:00:00.969) 0:03:04.784 ********* 2026-03-17 01:00:43.183595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183619 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.183626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183632 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.183638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183644 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.183650 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.183656 | orchestrator | 2026-03-17 01:00:43.183662 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-17 01:00:43.183671 | orchestrator | Tuesday 17 March 2026 00:57:28 +0000 (0:00:00.904) 0:03:05.689 ********* 2026-03-17 01:00:43.183678 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.183684 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.183690 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.183696 | orchestrator | 2026-03-17 01:00:43.183702 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-17 01:00:43.183708 | orchestrator | Tuesday 17 March 2026 00:57:30 +0000 (0:00:01.149) 0:03:06.838 ********* 2026-03-17 01:00:43.183714 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.183718 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.183722 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.183726 | orchestrator | 2026-03-17 01:00:43.183730 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-17 01:00:43.183734 | orchestrator | Tuesday 17 March 2026 00:57:32 +0000 (0:00:01.886) 0:03:08.725 ********* 2026-03-17 01:00:43.183781 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.183787 | orchestrator | 2026-03-17 01:00:43.183790 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-17 01:00:43.183795 | orchestrator | Tuesday 17 March 2026 00:57:32 +0000 (0:00:00.961) 
0:03:09.687 ********* 2026-03-17 01:00:43.183799 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-17 01:00:43.183802 | orchestrator | 2026-03-17 01:00:43.183806 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-17 01:00:43.183810 | orchestrator | Tuesday 17 March 2026 00:57:35 +0000 (0:00:02.969) 0:03:12.657 ********* 2026-03-17 01:00:43.183815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:00:43.183861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 01:00:43.183867 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.183909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:00:43.183916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 01:00:43.183924 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.183929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:00:43.183935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 01:00:43.183939 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.183943 | orchestrator | 2026-03-17 01:00:43.183947 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-17 01:00:43.183951 | orchestrator | Tuesday 17 March 2026 00:57:38 +0000 (0:00:02.304) 0:03:14.961 ********* 2026-03-17 01:00:43.183986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:00:43.183995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 01:00:43.184000 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.184007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:00:43.184038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 01:00:43.184044 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.184051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:00:43.184055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-17 01:00:43.184060 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.184063 | orchestrator | 2026-03-17 01:00:43.184067 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-17 01:00:43.184071 | orchestrator | Tuesday 17 March 2026 00:57:40 +0000 (0:00:02.475) 0:03:17.436 ********* 2026-03-17 01:00:43.184080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 01:00:43.184114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 01:00:43.184120 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.184128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 01:00:43.184132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 01:00:43.184136 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.184140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 01:00:43.184144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-17 01:00:43.184149 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.184154 | orchestrator | 2026-03-17 01:00:43.184161 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-17 01:00:43.184168 | orchestrator | Tuesday 17 March 2026 00:57:43 +0000 (0:00:02.844) 0:03:20.281 ********* 2026-03-17 01:00:43.184177 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.184183 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.184189 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.184195 | orchestrator | 2026-03-17 01:00:43.184201 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-17 01:00:43.184207 | orchestrator | Tuesday 17 March 2026 00:57:46 +0000 (0:00:02.461) 0:03:22.742 ********* 2026-03-17 01:00:43.184213 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.184219 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.184224 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.184230 | orchestrator | 2026-03-17 01:00:43.184240 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-17 01:00:43.184247 | orchestrator | Tuesday 17 March 2026 00:57:47 +0000 (0:00:01.139) 0:03:23.881 ********* 2026-03-17 01:00:43.184253 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.184259 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.184266 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.184272 | orchestrator | 2026-03-17 01:00:43.184279 | orchestrator | TASK 
[include_role : memcached] ************************************************ 2026-03-17 01:00:43.184290 | orchestrator | Tuesday 17 March 2026 00:57:47 +0000 (0:00:00.419) 0:03:24.301 ********* 2026-03-17 01:00:43.184297 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.184303 | orchestrator | 2026-03-17 01:00:43.184310 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-17 01:00:43.184316 | orchestrator | Tuesday 17 March 2026 00:57:48 +0000 (0:00:00.959) 0:03:25.260 ********* 2026-03-17 01:00:43.184378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-17 01:00:43.184389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-17 01:00:43.184396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-17 01:00:43.184402 | orchestrator | 2026-03-17 01:00:43.184408 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-17 01:00:43.184415 | orchestrator | Tuesday 17 March 2026 00:57:50 +0000 (0:00:01.612) 0:03:26.873 ********* 2026-03-17 01:00:43.184421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}})  2026-03-17 01:00:43.184428 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.184438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-17 01:00:43.184450 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.184496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-17 01:00:43.184502 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.184506 | orchestrator | 2026-03-17 01:00:43.184510 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-17 
01:00:43.184514 | orchestrator | Tuesday 17 March 2026 00:57:50 +0000 (0:00:00.391) 0:03:27.265 ********* 2026-03-17 01:00:43.184518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-17 01:00:43.184523 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.184527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-17 01:00:43.184531 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.184535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-17 01:00:43.184539 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.184543 | orchestrator | 2026-03-17 01:00:43.184547 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-17 01:00:43.184551 | orchestrator | Tuesday 17 March 2026 00:57:51 +0000 (0:00:00.552) 0:03:27.817 ********* 2026-03-17 01:00:43.184556 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.184562 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.184569 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.184574 | orchestrator | 2026-03-17 01:00:43.184580 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-17 01:00:43.184586 | orchestrator | 
Tuesday 17 March 2026 00:57:51 +0000 (0:00:00.373) 0:03:28.191 ********* 2026-03-17 01:00:43.184593 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.184599 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.184605 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.184610 | orchestrator | 2026-03-17 01:00:43.184616 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-17 01:00:43.184622 | orchestrator | Tuesday 17 March 2026 00:57:52 +0000 (0:00:01.113) 0:03:29.305 ********* 2026-03-17 01:00:43.184633 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.184640 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.184646 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.184651 | orchestrator | 2026-03-17 01:00:43.184658 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-17 01:00:43.184665 | orchestrator | Tuesday 17 March 2026 00:57:53 +0000 (0:00:00.497) 0:03:29.802 ********* 2026-03-17 01:00:43.184671 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.184677 | orchestrator | 2026-03-17 01:00:43.184683 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-17 01:00:43.184689 | orchestrator | Tuesday 17 March 2026 00:57:54 +0000 (0:00:01.094) 0:03:30.897 ********* 2026-03-17 01:00:43.184701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.184759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.184766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-17 01:00:43.184770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-17 01:00:43.184779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.184788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.184841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.184848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.184853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-17 01:00:43.184857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.184865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.184872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.184906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 
'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-17 01:00:43.184912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-17 01:00:43.184916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-17 01:00:43.184924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.184931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.184960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.184966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.184971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.184982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.184989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-17 01:00:43.185067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  
2026-03-17 01:00:43.185079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-17 01:00:43.185141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-17 01:00:43.185171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-17 01:00:43.185244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.185253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 
5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-17 01:00:43.185300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.185318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185322 | orchestrator | 2026-03-17 01:00:43.185326 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-17 01:00:43.185330 | orchestrator | Tuesday 17 March 2026 00:57:59 +0000 (0:00:04.814) 0:03:35.711 ********* 2026-03-17 01:00:43.185337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.185368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-17 01:00:43.185384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-17 01:00:43.185388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-17 01:00:43.185438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-17 01:00:43.185455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.185513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.185541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-17 01:00:43.185547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-17 01:00:43.185604 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.185610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.185623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-17 01:00:43.185702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-17 01:00:43.185707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-17 01:00:43.185717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-17 01:00:43.185802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185814 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-17 01:00:43.185870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.185890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185901 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.185908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-17 01:00:43.185943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-17 01:00:43.185948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.185953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-17 01:00:43.185957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:00:43.185961 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.185965 | orchestrator | 2026-03-17 01:00:43.185970 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-17 01:00:43.185974 | orchestrator | Tuesday 17 March 2026 00:58:00 +0000 (0:00:01.362) 0:03:37.074 ********* 2026-03-17 01:00:43.185978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.185983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.185987 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.186067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186089 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.186093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186137 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.186141 | orchestrator | 2026-03-17 01:00:43.186145 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-17 01:00:43.186149 | orchestrator | Tuesday 17 March 2026 00:58:02 +0000 (0:00:02.032) 0:03:39.107 ********* 2026-03-17 01:00:43.186153 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.186157 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.186161 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.186165 | orchestrator | 2026-03-17 01:00:43.186168 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-17 01:00:43.186172 | orchestrator | Tuesday 17 March 2026 00:58:03 +0000 (0:00:01.333) 0:03:40.440 ********* 2026-03-17 01:00:43.186176 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.186180 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.186184 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.186188 | orchestrator | 2026-03-17 01:00:43.186192 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-17 01:00:43.186196 | orchestrator | Tuesday 17 March 2026 00:58:05 +0000 (0:00:01.724) 0:03:42.165 ********* 2026-03-17 01:00:43.186200 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.186204 | orchestrator | 2026-03-17 01:00:43.186208 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-17 
01:00:43.186212 | orchestrator | Tuesday 17 March 2026 00:58:06 +0000 (0:00:01.411) 0:03:43.576 ********* 2026-03-17 01:00:43.186216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:00:43.186222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:00:43.186260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:00:43.186267 | orchestrator | 2026-03-17 01:00:43.186271 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-17 01:00:43.186275 | orchestrator | Tuesday 17 March 2026 00:58:10 +0000 (0:00:03.945) 0:03:47.521 ********* 2026-03-17 01:00:43.186279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:00:43.186283 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.186287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:00:43.186295 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.186301 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:00:43.186306 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.186309 | orchestrator | 2026-03-17 01:00:43.186313 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-17 01:00:43.186317 | orchestrator | Tuesday 17 March 2026 00:58:11 +0000 (0:00:00.777) 0:03:48.299 ********* 2026-03-17 01:00:43.186321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.186354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.186361 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.186365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.186369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.186373 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.186377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.186381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.186386 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.186389 | orchestrator | 2026-03-17 01:00:43.186393 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-17 01:00:43.186397 | orchestrator | Tuesday 17 March 2026 00:58:12 +0000 (0:00:00.755) 0:03:49.055 ********* 2026-03-17 01:00:43.186401 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.186405 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.186409 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.186417 | orchestrator | 2026-03-17 01:00:43.186421 | orchestrator | TASK 
[proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-17 01:00:43.186425 | orchestrator | Tuesday 17 March 2026 00:58:13 +0000 (0:00:01.406) 0:03:50.461 ********* 2026-03-17 01:00:43.186429 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.186433 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.186436 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.186440 | orchestrator | 2026-03-17 01:00:43.186444 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-17 01:00:43.186448 | orchestrator | Tuesday 17 March 2026 00:58:15 +0000 (0:00:01.952) 0:03:52.414 ********* 2026-03-17 01:00:43.186452 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.186456 | orchestrator | 2026-03-17 01:00:43.186460 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-17 01:00:43.186463 | orchestrator | Tuesday 17 March 2026 00:58:17 +0000 (0:00:01.464) 0:03:53.878 ********* 2026-03-17 01:00:43.186470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.186503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.186510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.186517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.186522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.186568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.186587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186608 | orchestrator | 2026-03-17 01:00:43.186613 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-17 01:00:43.186617 | orchestrator | Tuesday 17 March 2026 00:58:23 +0000 (0:00:06.276) 0:04:00.154 ********* 2026-03-17 01:00:43.186621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.186629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.186633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.186652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.186657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2026-03-17 01:00:43.186673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186677 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.186681 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.186689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.186706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.186714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.186722 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.186726 | orchestrator | 2026-03-17 01:00:43.186730 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-17 01:00:43.186734 | orchestrator | Tuesday 17 March 2026 00:58:24 +0000 (0:00:00.651) 0:04:00.806 ********* 2026-03-17 01:00:43.186738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186754 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.186760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186794 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.186800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 
01:00:43.186818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.186861 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.186867 | orchestrator | 2026-03-17 01:00:43.186874 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-17 01:00:43.186879 | orchestrator | Tuesday 17 March 2026 00:58:25 +0000 (0:00:01.350) 0:04:02.156 ********* 2026-03-17 01:00:43.186885 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.186892 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.186898 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.186903 | orchestrator | 2026-03-17 01:00:43.186909 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-17 01:00:43.186915 | orchestrator | Tuesday 17 March 2026 00:58:26 +0000 (0:00:01.040) 0:04:03.196 ********* 2026-03-17 01:00:43.186922 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.186929 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.186936 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.186942 | orchestrator | 2026-03-17 01:00:43.186949 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-17 01:00:43.186955 | orchestrator | Tuesday 17 March 2026 00:58:28 +0000 (0:00:01.857) 0:04:05.054 ********* 2026-03-17 01:00:43.186961 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.186968 | orchestrator | 2026-03-17 01:00:43.186973 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-17 01:00:43.186977 | orchestrator | Tuesday 17 
March 2026 00:58:31 +0000 (0:00:02.736) 0:04:07.791 *********
2026-03-17 01:00:43.186981 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-03-17 01:00:43.186986 | orchestrator |
2026-03-17 01:00:43.186990 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-03-17 01:00:43.186994 | orchestrator | Tuesday 17 March 2026 00:58:32 +0000 (0:00:01.413) 0:04:09.205 *********
2026-03-17 01:00:43.186998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187061 | orchestrator |
2026-03-17 01:00:43.187068 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-03-17 01:00:43.187075 | orchestrator | Tuesday 17 March 2026 00:58:37 +0000 (0:00:04.943) 0:04:14.149 *********
2026-03-17 01:00:43.187081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187088 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.187093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187097 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.187103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187110 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.187116 | orchestrator |
2026-03-17 01:00:43.187122 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-03-17 01:00:43.187128 | orchestrator | Tuesday 17 March 2026 00:58:38 +0000 (0:00:01.148) 0:04:15.298 *********
2026-03-17 01:00:43.187135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-17 01:00:43.187142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-17 01:00:43.187148 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.187155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-17 01:00:43.187172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-17 01:00:43.187177 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.187181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-17 01:00:43.187188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-17 01:00:43.187192 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.187196 | orchestrator |
2026-03-17 01:00:43.187200 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-17 01:00:43.187204 | orchestrator | Tuesday 17 March 2026 00:58:39 +0000 (0:00:01.286) 0:04:16.585 *********
2026-03-17 01:00:43.187209 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:00:43.187214 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:00:43.187218 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:00:43.187223 | orchestrator |
2026-03-17 01:00:43.187227 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-17 01:00:43.187251 | orchestrator | Tuesday 17 March 2026 00:58:41 +0000 (0:00:01.957) 0:04:18.543 *********
2026-03-17 01:00:43.187256 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:00:43.187261 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:00:43.187265 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:00:43.187269 | orchestrator |
2026-03-17 01:00:43.187274 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-17 01:00:43.187279 | orchestrator | Tuesday 17 March 2026 00:58:44 +0000 (0:00:02.645) 0:04:21.189 *********
2026-03-17 01:00:43.187284 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
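The `haproxy-config` items above all share one shape: a service entry whose `value` carries an `enabled` flag plus a `haproxy` map of listener definitions, each with its own `enabled`, `mode`, `external`, and port fields. As an illustration only (this is a sketch of the data structure logged above, not kolla-ansible's actual template code), the "skipping"/"changed" behaviour can be modelled as filtering listeners by those flags:

```python
# One of the per-service dicts from the log above, verbatim in structure.
novncproxy = {
    "key": "nova-novncproxy",
    "value": {
        "group": "nova-novncproxy",
        "enabled": True,
        "haproxy": {
            "nova_novncproxy": {
                "enabled": True, "mode": "http", "external": False,
                "port": "6080", "listen_port": "6080",
                "backend_http_extra": ["timeout tunnel 1h"],
            },
            "nova_novncproxy_external": {
                "enabled": True, "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "6080", "listen_port": "6080",
                "backend_http_extra": ["timeout tunnel 1h"],
            },
        },
    },
}

def enabled_listeners(service: dict) -> list[str]:
    """Illustrative helper: listener names that would actually be rendered."""
    if not service["value"]["enabled"]:
        return []  # whole service disabled -> the task skips every listener
    return [name for name, cfg in service["value"]["haproxy"].items()
            if cfg["enabled"]]

print(enabled_listeners(novncproxy))
```

With `enabled: True` at both levels (as for nova-novncproxy here), both the internal and the external listener survive the filter; the nova-spicehtml5proxy and nova-serialproxy items later in the log carry `enabled: False` and are skipped on every node.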
2026-03-17 01:00:43.187289 | orchestrator |
2026-03-17 01:00:43.187293 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-17 01:00:43.187300 | orchestrator | Tuesday 17 March 2026 00:58:45 +0000 (0:00:00.823) 0:04:22.012 *********
2026-03-17 01:00:43.187307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187313 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.187319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187326 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.187333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187345 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.187353 | orchestrator |
2026-03-17 01:00:43.187360 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-17 01:00:43.187368 | orchestrator | Tuesday 17 March 2026 00:58:46 +0000 (0:00:01.597) 0:04:23.610 *********
2026-03-17 01:00:43.187375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187382 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.187393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187399 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.187421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-17 01:00:43.187427 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.187431 | orchestrator |
2026-03-17 01:00:43.187436 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-17 01:00:43.187440 | orchestrator | Tuesday 17 March 2026 00:58:48 +0000 (0:00:01.197) 0:04:24.808 *********
2026-03-17 01:00:43.187445 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.187449 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.187453 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.187458 | orchestrator |
2026-03-17 01:00:43.187462 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-17 01:00:43.187467 | orchestrator | Tuesday 17 March 2026 00:58:49 +0000 (0:00:01.804) 0:04:26.612 *********
2026-03-17 01:00:43.187471 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:00:43.187476 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:00:43.187481 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:00:43.187485 | orchestrator |
2026-03-17 01:00:43.187490 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-17 01:00:43.187494 | orchestrator | Tuesday 17 March 2026 00:58:52 +0000 (0:00:02.267) 0:04:28.880 *********
2026-03-17 01:00:43.187498 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:00:43.187503 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:00:43.187507 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:00:43.187511 | orchestrator |
2026-03-17 01:00:43.187516 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-03-17 01:00:43.187520 | orchestrator | Tuesday 17 March 2026 00:58:54 +0000 (0:00:02.575) 0:04:31.455 *********
2026-03-17 01:00:43.187528 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-17 01:00:43.187533 | orchestrator |
2026-03-17 01:00:43.187537 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-17 01:00:43.187541 | orchestrator | Tuesday 17 March 2026 00:58:56 +0000 (0:00:01.363) 0:04:32.819 *********
2026-03-17 01:00:43.187546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-17 01:00:43.187551 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.187555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-17 01:00:43.187560 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.187564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-17 01:00:43.187569 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.187573 | orchestrator |
2026-03-17 01:00:43.187578 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-17 01:00:43.187582 | orchestrator | Tuesday 17 March 2026 00:58:57 +0000 (0:00:01.162) 0:04:33.981 *********
2026-03-17 01:00:43.187589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-17 01:00:43.187594 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.187612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-17 01:00:43.187617 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.187621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-17 01:00:43.187630 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.187634 | orchestrator |
2026-03-17 01:00:43.187638 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-17 01:00:43.187641 | orchestrator | Tuesday 17 March 2026 00:58:58 +0000 (0:00:01.092) 0:04:35.074 *********
2026-03-17 01:00:43.187645 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.187649 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.187653 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:00:43.187656 | orchestrator |
2026-03-17 01:00:43.187660 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-17 01:00:43.187664 | orchestrator | Tuesday 17 March 2026 00:58:59 +0000 (0:00:01.530) 0:04:36.605 *********
2026-03-17 01:00:43.187668 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:00:43.187672 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:00:43.187676 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:00:43.187679 | orchestrator |
2026-03-17 01:00:43.187683 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
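As an aside on reading this console output: Zuul prefixes every record with a microsecond timestamp and the node name, so when records get concatenated onto one physical line they can be recovered by splitting on that prefix. A hypothetical helper (ours, not part of Zuul or OSISM tooling) sketches the idea:

```python
import re

# Zero-width lookahead on the Zuul record prefix:
# "YYYY-MM-DD HH:MM:SS.ffffff | " starts each console record.
TS = re.compile(r"(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6} \| )")

def split_records(blob: str) -> list[str]:
    """Split concatenated console output back into one record per timestamp."""
    return [rec.strip() for rec in TS.split(blob) if rec.strip()]

blob = ("2026-03-17 01:00:43.187645 | orchestrator | skipping: [testbed-node-0] "
        "2026-03-17 01:00:43.187649 | orchestrator | skipping: [testbed-node-1] ")
for rec in split_records(blob):
    print(rec)
```

Splitting on a zero-width lookahead keeps each timestamp attached to its own record; `re.split` supports empty matches on Python 3.7 and later.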
2026-03-17 01:00:43.187687 | orchestrator | Tuesday 17 March 2026 00:59:01 +0000 (0:00:02.038) 0:04:38.644 *********
2026-03-17 01:00:43.187691 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:00:43.187695 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:00:43.187699 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:00:43.187703 | orchestrator |
2026-03-17 01:00:43.187706 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-17 01:00:43.187710 | orchestrator | Tuesday 17 March 2026 00:59:04 +0000 (0:00:02.588) 0:04:41.232 *********
2026-03-17 01:00:43.187714 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:00:43.187718 | orchestrator |
2026-03-17 01:00:43.187722 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-17 01:00:43.187725 | orchestrator | Tuesday 17 March 2026 00:59:05 +0000 (0:00:01.342) 0:04:42.575 *********
2026-03-17 01:00:43.187730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 01:00:43.187737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 01:00:43.187757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 01:00:43.187763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 01:00:43.187767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 01:00:43.187811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.187815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.187819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 01:00:43.187853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.187873 | orchestrator |
2026-03-17 01:00:43.187877 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-03-17 01:00:43.187894 | orchestrator | Tuesday 17 March 2026 00:59:08 +0000 (0:00:02.861) 0:04:45.437 *********
2026-03-17 01:00:43.187899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 01:00:43.187903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 01:00:43.187907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.187925 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:00:43.187941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 01:00:43.187946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 01:00:43.187950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.187964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:00:43.187968 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:00:43.187975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-17 01:00:43.187995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-17 01:00:43.188000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-17 01:00:43.188004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-17
01:00:43.188008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:00:43.188012 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.188016 | orchestrator | 2026-03-17 01:00:43.188020 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-17 01:00:43.188024 | orchestrator | Tuesday 17 March 2026 00:59:09 +0000 (0:00:00.617) 0:04:46.054 ********* 2026-03-17 01:00:43.188028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 01:00:43.188033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 01:00:43.188037 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.188044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 01:00:43.188048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 01:00:43.188052 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.188056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 01:00:43.188062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-17 01:00:43.188066 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.188070 | orchestrator | 2026-03-17 01:00:43.188074 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-17 01:00:43.188078 | orchestrator | Tuesday 17 March 2026 00:59:10 +0000 (0:00:00.782) 0:04:46.837 ********* 2026-03-17 01:00:43.188081 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.188085 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.188091 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.188097 | orchestrator | 2026-03-17 01:00:43.188102 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-17 01:00:43.188111 | orchestrator | Tuesday 17 March 2026 00:59:11 +0000 (0:00:01.460) 0:04:48.297 ********* 2026-03-17 01:00:43.188120 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.188145 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.188152 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.188158 | orchestrator | 2026-03-17 01:00:43.188164 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-17 01:00:43.188169 | orchestrator | Tuesday 17 March 2026 00:59:13 +0000 (0:00:01.952) 0:04:50.250 ********* 
2026-03-17 01:00:43.188175 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.188181 | orchestrator | 2026-03-17 01:00:43.188188 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-17 01:00:43.188193 | orchestrator | Tuesday 17 March 2026 00:59:14 +0000 (0:00:01.298) 0:04:51.549 ********* 2026-03-17 01:00:43.188200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.188208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.188221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.188249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-17 01:00:43.188257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-17 01:00:43.188261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-17 01:00:43.188269 | orchestrator | 2026-03-17 01:00:43.188273 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-17 01:00:43.188277 | orchestrator | Tuesday 17 March 2026 00:59:19 +0000 (0:00:04.916) 0:04:56.465 ********* 2026-03-17 01:00:43.188284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 
01:00:43.188300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-17 01:00:43.188305 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.188309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.188317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-17 01:00:43.188321 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.188329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.188347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-17 01:00:43.188352 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.188356 | orchestrator | 2026-03-17 01:00:43.188360 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-17 01:00:43.188363 | orchestrator | Tuesday 17 March 2026 00:59:20 +0000 (0:00:01.043) 0:04:57.508 ********* 2026-03-17 01:00:43.188367 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.188372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-17 01:00:43.188380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-17 01:00:43.188384 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.188388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.188392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-17 01:00:43.188396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-17 01:00:43.188400 | orchestrator | skipping: [testbed-node-1] 
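The haproxy entries the firewall task iterates over here share a small schema: `enabled`, `mode`, `external`, `port`, plus optional keys such as `listen_port` and `external_fqdn`. As a hedged illustration only (not OSISM or kolla-ansible code — the helper function and variable names are hypothetical), a sketch of filtering such a mapping down to the externally reachable listen ports:

```python
# Illustrative sketch: mirrors the shape of the 'haproxy' service dicts
# seen in the log above (opensearch example). Values are copied from the
# log; external_listen_ports() is a hypothetical helper, not kolla code.
OPENSEARCH_HAPROXY = {
    "opensearch": {
        "enabled": True, "mode": "http", "external": False, "port": "9200",
    },
    "opensearch-dashboards": {
        "enabled": True, "mode": "http", "external": False, "port": "5601",
    },
    "opensearch_dashboards_external": {
        "enabled": True, "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "5601", "listen_port": "5601",
    },
}

def external_listen_ports(haproxy: dict) -> set:
    """Collect listen ports of enabled, external-facing frontends."""
    ports = set()
    for svc in haproxy.values():
        # The log shows both booleans (True) and strings ('yes') for
        # 'enabled', so accept either form here.
        if svc.get("enabled") in (True, "yes") and svc.get("external"):
            # Fall back to 'port' when no explicit 'listen_port' is set.
            ports.add(int(svc.get("listen_port", svc["port"])))
    return ports

print(external_listen_ports(OPENSEARCH_HAPROXY))  # {5601}
```

Only the `*_external` entry qualifies, which matches the pattern above: internal frontends (`external: False`) are skipped by the firewall task, and the external dashboard frontend listens on 5601.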
2026-03-17 01:00:43.188404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.188408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-17 01:00:43.188415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-17 01:00:43.188419 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.188422 | orchestrator | 2026-03-17 01:00:43.188426 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-17 01:00:43.188430 | orchestrator | Tuesday 17 March 2026 00:59:21 +0000 (0:00:00.893) 0:04:58.402 ********* 2026-03-17 01:00:43.188434 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.188438 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.188442 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.188445 | orchestrator | 2026-03-17 01:00:43.188449 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-17 01:00:43.188453 | orchestrator | Tuesday 17 March 2026 00:59:22 +0000 (0:00:00.430) 0:04:58.832 ********* 2026-03-17 01:00:43.188457 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.188472 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.188477 | orchestrator | 
skipping: [testbed-node-2] 2026-03-17 01:00:43.188480 | orchestrator | 2026-03-17 01:00:43.188484 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-17 01:00:43.188488 | orchestrator | Tuesday 17 March 2026 00:59:23 +0000 (0:00:01.359) 0:05:00.192 ********* 2026-03-17 01:00:43.188492 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.188496 | orchestrator | 2026-03-17 01:00:43.188500 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-17 01:00:43.188504 | orchestrator | Tuesday 17 March 2026 00:59:25 +0000 (0:00:01.568) 0:05:01.760 ********* 2026-03-17 01:00:43.188512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-17 01:00:43.188516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:00:43.188523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-17 01:00:43.188556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.188574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:00:43.188581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-17 01:00:43.188596 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.188614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:00:43.188622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.188635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.188642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-17 01:00:43.188658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.188672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-17 
01:00:43.188685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.188694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:00:43.188742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.188749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-17 01:00:43.188755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.188781 | orchestrator | 2026-03-17 01:00:43.188805 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-17 
01:00:43.188812 | orchestrator | Tuesday 17 March 2026 00:59:28 +0000 (0:00:03.840) 0:05:05.601 ********* 2026-03-17 01:00:43.188818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-17 01:00:43.188868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:00:43.188875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.188920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.188936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-17 01:00:43.188944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.188956 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.188963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-17 01:00:43.188975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:00:43.188982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.188997 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.189007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.189017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-17 01:00:43.189035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.189042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.189047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.189053 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-17 01:00:43.189065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:00:43.189076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.189083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.189094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.189129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:00:43.189137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-17 01:00:43.189143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.189155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:00:43.189163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:00:43.189170 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189176 | orchestrator | 2026-03-17 01:00:43.189183 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-17 01:00:43.189191 | orchestrator | Tuesday 17 March 2026 00:59:30 +0000 (0:00:01.201) 0:05:06.803 ********* 2026-03-17 01:00:43.189199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check 
send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-17 01:00:43.189204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-17 01:00:43.189210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.189214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.189218 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-17 01:00:43.189226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-17 01:00:43.189230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.189238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-17 01:00:43.189242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.189246 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-17 01:00:43.189257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.189264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-17 01:00:43.189268 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189272 | orchestrator | 2026-03-17 01:00:43.189276 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-17 01:00:43.189280 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.923) 0:05:07.726 ********* 2026-03-17 01:00:43.189284 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189287 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189291 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189295 | orchestrator | 2026-03-17 01:00:43.189299 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-17 01:00:43.189303 | orchestrator | Tuesday 17 March 2026 00:59:31 +0000 (0:00:00.362) 0:05:08.089 ********* 2026-03-17 01:00:43.189307 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189310 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189314 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189318 | orchestrator | 2026-03-17 01:00:43.189322 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-17 01:00:43.189326 | orchestrator | Tuesday 17 March 2026 00:59:32 +0000 (0:00:01.113) 0:05:09.202 ********* 2026-03-17 
01:00:43.189329 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.189333 | orchestrator | 2026-03-17 01:00:43.189337 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-17 01:00:43.189341 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:01.445) 0:05:10.647 ********* 2026-03-17 01:00:43.189345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 01:00:43.189353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 01:00:43.189363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-17 01:00:43.189369 | orchestrator | 2026-03-17 01:00:43.189375 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-17 01:00:43.189380 | orchestrator | Tuesday 17 March 2026 00:59:36 +0000 (0:00:02.380) 0:05:13.028 ********* 2026-03-17 01:00:43.189386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 01:00:43.189393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 01:00:43.189403 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189409 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-17 01:00:43.189419 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189423 | orchestrator | 2026-03-17 01:00:43.189426 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-17 01:00:43.189433 | orchestrator | Tuesday 17 March 2026 00:59:36 +0000 (0:00:00.346) 0:05:13.375 ********* 2026-03-17 01:00:43.189437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-17 01:00:43.189442 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-17 01:00:43.189450 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  
2026-03-17 01:00:43.189458 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189462 | orchestrator | 2026-03-17 01:00:43.189468 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-17 01:00:43.189472 | orchestrator | Tuesday 17 March 2026 00:59:37 +0000 (0:00:00.743) 0:05:14.118 ********* 2026-03-17 01:00:43.189475 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189479 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189483 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189487 | orchestrator | 2026-03-17 01:00:43.189491 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-17 01:00:43.189494 | orchestrator | Tuesday 17 March 2026 00:59:37 +0000 (0:00:00.422) 0:05:14.541 ********* 2026-03-17 01:00:43.189498 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189502 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189506 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189510 | orchestrator | 2026-03-17 01:00:43.189514 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-17 01:00:43.189524 | orchestrator | Tuesday 17 March 2026 00:59:39 +0000 (0:00:01.291) 0:05:15.832 ********* 2026-03-17 01:00:43.189528 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.189531 | orchestrator | 2026-03-17 01:00:43.189535 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-17 01:00:43.189539 | orchestrator | Tuesday 17 March 2026 00:59:40 +0000 (0:00:01.705) 0:05:17.537 ********* 2026-03-17 01:00:43.189543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-17 01:00:43.189548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-17 01:00:43.189555 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-17 01:00:43.189563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:00:43.189571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:00:43.189575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:00:43.189580 | orchestrator | 2026-03-17 01:00:43.189584 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-17 01:00:43.189587 | orchestrator | Tuesday 17 March 2026 00:59:46 +0000 (0:00:05.857) 0:05:23.395 ********* 2026-03-17 01:00:43.189597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-17 01:00:43.189608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:00:43.189615 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-17 01:00:43.189630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:00:43.189640 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-17 01:00:43.189667 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:00:43.189673 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189679 | orchestrator | 2026-03-17 01:00:43.189685 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-17 01:00:43.189691 | orchestrator | Tuesday 17 March 2026 00:59:47 +0000 (0:00:00.634) 0:05:24.029 ********* 2026-03-17 01:00:43.189697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-17 01:00:43.189704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-17 01:00:43.189711 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.189717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.189724 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-17 01:00:43.189736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-17 01:00:43.189742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.189751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.189762 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189768 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-17 01:00:43.189778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-17 01:00:43.189784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.189790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-17 01:00:43.189797 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189801 | orchestrator | 2026-03-17 01:00:43.189805 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-17 01:00:43.189809 | orchestrator | Tuesday 17 March 2026 00:59:48 +0000 (0:00:00.906) 0:05:24.936 ********* 2026-03-17 01:00:43.189813 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.189817 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.189838 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.189843 | orchestrator | 2026-03-17 01:00:43.189846 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-17 01:00:43.189850 | orchestrator | Tuesday 17 March 2026 00:59:49 +0000 
(0:00:01.670) 0:05:26.607 ********* 2026-03-17 01:00:43.189854 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.189858 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.189862 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.189866 | orchestrator | 2026-03-17 01:00:43.189869 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-17 01:00:43.189873 | orchestrator | Tuesday 17 March 2026 00:59:51 +0000 (0:00:02.003) 0:05:28.611 ********* 2026-03-17 01:00:43.189877 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189881 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189885 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189889 | orchestrator | 2026-03-17 01:00:43.189892 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-17 01:00:43.189896 | orchestrator | Tuesday 17 March 2026 00:59:52 +0000 (0:00:00.310) 0:05:28.922 ********* 2026-03-17 01:00:43.189900 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189904 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189908 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189911 | orchestrator | 2026-03-17 01:00:43.189915 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-17 01:00:43.189919 | orchestrator | Tuesday 17 March 2026 00:59:52 +0000 (0:00:00.289) 0:05:29.212 ********* 2026-03-17 01:00:43.189923 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189927 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189931 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189934 | orchestrator | 2026-03-17 01:00:43.189938 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-17 01:00:43.189942 | orchestrator | Tuesday 17 March 2026 00:59:52 +0000 
(0:00:00.297) 0:05:29.509 ********* 2026-03-17 01:00:43.189946 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189950 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189953 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189957 | orchestrator | 2026-03-17 01:00:43.189965 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-17 01:00:43.189969 | orchestrator | Tuesday 17 March 2026 00:59:53 +0000 (0:00:00.574) 0:05:30.084 ********* 2026-03-17 01:00:43.189973 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.189977 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.189981 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.189984 | orchestrator | 2026-03-17 01:00:43.189988 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-03-17 01:00:43.189992 | orchestrator | Tuesday 17 March 2026 00:59:53 +0000 (0:00:00.298) 0:05:30.382 ********* 2026-03-17 01:00:43.189996 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:00:43.190000 | orchestrator | 2026-03-17 01:00:43.190003 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-17 01:00:43.190007 | orchestrator | Tuesday 17 March 2026 00:59:55 +0000 (0:00:01.730) 0:05:32.113 ********* 2026-03-17 01:00:43.190050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.190064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.190072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-17 01:00:43.190078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.190084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.190097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-17 01:00:43.190104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-03-17 01:00:43.190115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.190125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-17 01:00:43.190132 | orchestrator | 2026-03-17 01:00:43.190140 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-17 01:00:43.190146 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:02.498) 0:05:34.611 ********* 2026-03-17 01:00:43.190153 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:00:43.190160 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:00:43.190167 | orchestrator | } 2026-03-17 01:00:43.190173 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:00:43.190180 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:00:43.190186 | orchestrator | } 2026-03-17 01:00:43.190193 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:00:43.190200 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:00:43.190206 | orchestrator | } 2026-03-17 01:00:43.190212 | orchestrator 
| 2026-03-17 01:00:43.190219 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:00:43.190225 | orchestrator | Tuesday 17 March 2026 00:59:58 +0000 (0:00:00.384) 0:05:34.995 ********* 2026-03-17 01:00:43.190229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.190237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.190241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.190245 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.190249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.190256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.190264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.190268 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.190272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-17 01:00:43.190276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-17 01:00:43.190285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-17 01:00:43.190289 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.190293 | orchestrator | 2026-03-17 01:00:43.190297 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-17 01:00:43.190301 | orchestrator | Tuesday 17 March 2026 00:59:59 +0000 (0:00:01.556) 0:05:36.552 ********* 2026-03-17 01:00:43.190304 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.190309 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.190312 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.190316 | orchestrator | 2026-03-17 01:00:43.190320 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-17 01:00:43.190324 | orchestrator | Tuesday 17 March 2026 01:00:00 +0000 (0:00:00.748) 0:05:37.301 ********* 2026-03-17 01:00:43.190327 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.190331 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.190335 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.190339 | orchestrator | 2026-03-17 01:00:43.190343 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-17 01:00:43.190346 | orchestrator | Tuesday 17 March 2026 01:00:00 +0000 (0:00:00.364) 0:05:37.665 ********* 2026-03-17 01:00:43.190350 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.190354 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.190358 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.190361 | orchestrator | 2026-03-17 01:00:43.190365 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-17 01:00:43.190369 | orchestrator | Tuesday 17 March 2026 01:00:02 +0000 (0:00:01.507) 0:05:39.172 ********* 2026-03-17 01:00:43.190373 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.190377 | orchestrator | ok: [testbed-node-1] 
2026-03-17 01:00:43.190381 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.190384 | orchestrator | 2026-03-17 01:00:43.190388 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-17 01:00:43.190392 | orchestrator | Tuesday 17 March 2026 01:00:03 +0000 (0:00:01.050) 0:05:40.223 ********* 2026-03-17 01:00:43.190398 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.190402 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.190406 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.190410 | orchestrator | 2026-03-17 01:00:43.190414 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-17 01:00:43.190417 | orchestrator | Tuesday 17 March 2026 01:00:04 +0000 (0:00:00.967) 0:05:41.191 ********* 2026-03-17 01:00:43.190421 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.190425 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.190429 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.190433 | orchestrator | 2026-03-17 01:00:43.190436 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-17 01:00:43.190440 | orchestrator | Tuesday 17 March 2026 01:00:14 +0000 (0:00:09.793) 0:05:50.985 ********* 2026-03-17 01:00:43.190444 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.190451 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.190455 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.190458 | orchestrator | 2026-03-17 01:00:43.190462 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-17 01:00:43.190468 | orchestrator | Tuesday 17 March 2026 01:00:15 +0000 (0:00:01.248) 0:05:52.234 ********* 2026-03-17 01:00:43.190472 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.190476 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.190479 | orchestrator | 
changed: [testbed-node-2] 2026-03-17 01:00:43.190483 | orchestrator | 2026-03-17 01:00:43.190487 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-17 01:00:43.190491 | orchestrator | Tuesday 17 March 2026 01:00:23 +0000 (0:00:08.443) 0:06:00.678 ********* 2026-03-17 01:00:43.190495 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.190498 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.190502 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.190506 | orchestrator | 2026-03-17 01:00:43.190510 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-17 01:00:43.190514 | orchestrator | Tuesday 17 March 2026 01:00:27 +0000 (0:00:03.790) 0:06:04.468 ********* 2026-03-17 01:00:43.190517 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:00:43.190521 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:00:43.190525 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:00:43.190529 | orchestrator | 2026-03-17 01:00:43.190532 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-17 01:00:43.190536 | orchestrator | Tuesday 17 March 2026 01:00:35 +0000 (0:00:08.141) 0:06:12.610 ********* 2026-03-17 01:00:43.190540 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.190544 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.190548 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.190551 | orchestrator | 2026-03-17 01:00:43.190555 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-17 01:00:43.190559 | orchestrator | Tuesday 17 March 2026 01:00:36 +0000 (0:00:00.659) 0:06:13.269 ********* 2026-03-17 01:00:43.190563 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.190566 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.190570 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:00:43.190574 | orchestrator | 2026-03-17 01:00:43.190578 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-17 01:00:43.190582 | orchestrator | Tuesday 17 March 2026 01:00:36 +0000 (0:00:00.336) 0:06:13.606 ********* 2026-03-17 01:00:43.190585 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.190589 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.190593 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.190597 | orchestrator | 2026-03-17 01:00:43.190601 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-17 01:00:43.190604 | orchestrator | Tuesday 17 March 2026 01:00:37 +0000 (0:00:00.330) 0:06:13.936 ********* 2026-03-17 01:00:43.190608 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.190612 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.190616 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.190619 | orchestrator | 2026-03-17 01:00:43.190623 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-17 01:00:43.190627 | orchestrator | Tuesday 17 March 2026 01:00:37 +0000 (0:00:00.331) 0:06:14.268 ********* 2026-03-17 01:00:43.190631 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.190634 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.190638 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:00:43.190642 | orchestrator | 2026-03-17 01:00:43.190646 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-17 01:00:43.190650 | orchestrator | Tuesday 17 March 2026 01:00:38 +0000 (0:00:00.646) 0:06:14.914 ********* 2026-03-17 01:00:43.190653 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:00:43.190657 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:00:43.190666 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:00:43.190670 | orchestrator | 2026-03-17 01:00:43.190674 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-17 01:00:43.190677 | orchestrator | Tuesday 17 March 2026 01:00:38 +0000 (0:00:00.375) 0:06:15.289 ********* 2026-03-17 01:00:43.190681 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.190685 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.190689 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.190693 | orchestrator | 2026-03-17 01:00:43.190696 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-17 01:00:43.190700 | orchestrator | Tuesday 17 March 2026 01:00:39 +0000 (0:00:00.885) 0:06:16.175 ********* 2026-03-17 01:00:43.190704 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:00:43.190708 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:00:43.190711 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:00:43.190715 | orchestrator | 2026-03-17 01:00:43.190719 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:00:43.190723 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-17 01:00:43.190729 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-17 01:00:43.190739 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-17 01:00:43.190745 | orchestrator | 2026-03-17 01:00:43.190751 | orchestrator | 2026-03-17 01:00:43.190757 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:00:43.190763 | orchestrator | Tuesday 17 March 2026 01:00:40 +0000 (0:00:00.775) 0:06:16.950 ********* 2026-03-17 01:00:43.190769 | orchestrator | 
=============================================================================== 2026-03-17 01:00:43.190775 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.79s 2026-03-17 01:00:43.190780 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.44s 2026-03-17 01:00:43.190786 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.14s 2026-03-17 01:00:43.190793 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.28s 2026-03-17 01:00:43.190798 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.86s 2026-03-17 01:00:43.190807 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.41s 2026-03-17 01:00:43.190813 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.22s 2026-03-17 01:00:43.190818 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.09s 2026-03-17 01:00:43.190839 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 5.09s 2026-03-17 01:00:43.190845 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.94s 2026-03-17 01:00:43.190851 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.92s 2026-03-17 01:00:43.190857 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.92s 2026-03-17 01:00:43.190862 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.81s 2026-03-17 01:00:43.190868 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.64s 2026-03-17 01:00:43.190874 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.36s 2026-03-17 01:00:43.190879 | orchestrator | 
haproxy-config : Copying over magnum haproxy config --------------------- 4.27s 2026-03-17 01:00:43.190885 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.21s 2026-03-17 01:00:43.190891 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.20s 2026-03-17 01:00:43.190897 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.19s 2026-03-17 01:00:43.190910 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.95s 2026-03-17 01:00:43.190916 | orchestrator | 2026-03-17 01:00:43 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED 2026-03-17 01:00:43.190922 | orchestrator | 2026-03-17 01:00:43 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED 2026-03-17 01:00:43.190928 | orchestrator | 2026-03-17 01:00:43 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:46.204762 | orchestrator | 2026-03-17 01:00:46 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 01:00:46.206153 | orchestrator | 2026-03-17 01:00:46 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED 2026-03-17 01:00:46.208206 | orchestrator | 2026-03-17 01:00:46 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED 2026-03-17 01:00:46.208265 | orchestrator | 2026-03-17 01:00:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:00:49.264674 | orchestrator | 2026-03-17 01:00:49 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 01:00:49.264811 | orchestrator | 2026-03-17 01:00:49 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED 2026-03-17 01:00:49.266042 | orchestrator | 2026-03-17 01:00:49 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED 2026-03-17 01:00:49.266075 | orchestrator | 2026-03-17 01:00:49 | INFO  | Wait 1 second(s) until the next check 
2026-03-17 01:02:32.881366 | orchestrator | 2026-03-17 01:02:32 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state STARTED 2026-03-17 01:02:32.881783 | orchestrator | 2026-03-17 01:02:32 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED 2026-03-17 01:02:32.882883 | orchestrator | 2026-03-17 01:02:32 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED 2026-03-17 01:02:32.882908 | orchestrator | 2026-03-17 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:35.928060 | orchestrator | 2026-03-17 01:02:35 | INFO  | Task dc78f10f-b06b-4988-9d0b-a01b0f4aa011 is in state SUCCESS 2026-03-17 01:02:35.929958 | orchestrator | 2026-03-17 01:02:35.930003 | orchestrator | [WARNING]: Collection
community.general does not support Ansible version 2026-03-17 01:02:35.930009 | orchestrator | 2.16.14 2026-03-17 01:02:35.930041 | orchestrator | 2026-03-17 01:02:35.930045 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-17 01:02:35.930050 | orchestrator | 2026-03-17 01:02:35.930054 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-17 01:02:35.930058 | orchestrator | Tuesday 17 March 2026 00:51:59 +0000 (0:00:00.871) 0:00:00.871 ********* 2026-03-17 01:02:35.930063 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.930068 | orchestrator | 2026-03-17 01:02:35.930072 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-17 01:02:35.930076 | orchestrator | Tuesday 17 March 2026 00:52:01 +0000 (0:00:01.333) 0:00:02.205 ********* 2026-03-17 01:02:35.930080 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.930084 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.930088 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.930091 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.930095 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.930099 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.930103 | orchestrator | 2026-03-17 01:02:35.930107 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-17 01:02:35.930111 | orchestrator | Tuesday 17 March 2026 00:52:02 +0000 (0:00:01.774) 0:00:03.980 ********* 2026-03-17 01:02:35.930115 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.930119 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.930123 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.930126 | orchestrator | ok: [testbed-node-0] 2026-03-17 
01:02:35.930130 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.930134 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.930138 | orchestrator | 2026-03-17 01:02:35.930142 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-17 01:02:35.930146 | orchestrator | Tuesday 17 March 2026 00:52:03 +0000 (0:00:00.618) 0:00:04.599 ********* 2026-03-17 01:02:35.930150 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.930154 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.930157 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.930161 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.930166 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.930169 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.930173 | orchestrator | 2026-03-17 01:02:35.930177 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-17 01:02:35.930194 | orchestrator | Tuesday 17 March 2026 00:52:04 +0000 (0:00:00.944) 0:00:05.544 ********* 2026-03-17 01:02:35.930198 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.930202 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.930206 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.930210 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.930214 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.930218 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.930222 | orchestrator | 2026-03-17 01:02:35.930236 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-17 01:02:35.930240 | orchestrator | Tuesday 17 March 2026 00:52:05 +0000 (0:00:00.980) 0:00:06.524 ********* 2026-03-17 01:02:35.930244 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.930248 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.930252 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.930256 | orchestrator | ok: 
[testbed-node-0] 2026-03-17 01:02:35.930260 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.930263 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.930267 | orchestrator | 2026-03-17 01:02:35.930271 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-17 01:02:35.930275 | orchestrator | Tuesday 17 March 2026 00:52:06 +0000 (0:00:00.886) 0:00:07.410 ********* 2026-03-17 01:02:35.930279 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.930318 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.930324 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.930328 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.930332 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.930336 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.930339 | orchestrator | 2026-03-17 01:02:35.930343 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-17 01:02:35.930391 | orchestrator | Tuesday 17 March 2026 00:52:07 +0000 (0:00:01.039) 0:00:08.450 ********* 2026-03-17 01:02:35.930397 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.930401 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.930405 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.930409 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.930413 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.930417 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.930421 | orchestrator | 2026-03-17 01:02:35.930424 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-17 01:02:35.930428 | orchestrator | Tuesday 17 March 2026 00:52:08 +0000 (0:00:00.981) 0:00:09.432 ********* 2026-03-17 01:02:35.930432 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.930436 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.930440 | orchestrator | 
ok: [testbed-node-5] 2026-03-17 01:02:35.930444 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.930447 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.930451 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.930455 | orchestrator | 2026-03-17 01:02:35.930459 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-17 01:02:35.930591 | orchestrator | Tuesday 17 March 2026 00:52:09 +0000 (0:00:00.951) 0:00:10.383 ********* 2026-03-17 01:02:35.930600 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:02:35.930605 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:02:35.930609 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:02:35.930612 | orchestrator | 2026-03-17 01:02:35.930616 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-17 01:02:35.930620 | orchestrator | Tuesday 17 March 2026 00:52:10 +0000 (0:00:00.922) 0:00:11.306 ********* 2026-03-17 01:02:35.930624 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.930628 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.930632 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.930644 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.930648 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.930652 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.930655 | orchestrator | 2026-03-17 01:02:35.930659 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-17 01:02:35.930663 | orchestrator | Tuesday 17 March 2026 00:52:11 +0000 (0:00:01.747) 0:00:13.053 ********* 2026-03-17 01:02:35.930667 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:02:35.930671 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:02:35.930680 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:02:35.930684 | orchestrator | 2026-03-17 01:02:35.930688 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-17 01:02:35.930692 | orchestrator | Tuesday 17 March 2026 00:52:14 +0000 (0:00:02.816) 0:00:15.870 ********* 2026-03-17 01:02:35.930696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-17 01:02:35.930699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-17 01:02:35.930703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-17 01:02:35.930707 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.930711 | orchestrator | 2026-03-17 01:02:35.930728 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-17 01:02:35.930735 | orchestrator | Tuesday 17 March 2026 00:52:15 +0000 (0:00:00.416) 0:00:16.286 ********* 2026-03-17 01:02:35.930743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.930752 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.930763 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.930769 | 
orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.930775 | orchestrator | 2026-03-17 01:02:35.930781 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-17 01:02:35.930787 | orchestrator | Tuesday 17 March 2026 00:52:15 +0000 (0:00:00.895) 0:00:17.182 ********* 2026-03-17 01:02:35.930794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.930801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.930807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.930813 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.930820 | orchestrator | 2026-03-17 01:02:35.930826 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-17 01:02:35.930833 | orchestrator | Tuesday 17 March 2026 
00:52:16 +0000 (0:00:00.404) 0:00:17.586 ********* 2026-03-17 01:02:35.930847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-17 00:52:12.816782', 'end': '2026-03-17 00:52:12.932261', 'delta': '0:00:00.115479', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.930862 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-17 00:52:13.512079', 'end': '2026-03-17 00:52:13.620111', 'delta': '0:00:00.108032', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.930870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-17 00:52:14.462959', 'end': '2026-03-17 00:52:14.555114', 'delta': '0:00:00.092155', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.930877 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.930884 | orchestrator | 2026-03-17 01:02:35.930937 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-17 01:02:35.930942 | orchestrator | Tuesday 17 March 2026 00:52:16 +0000 (0:00:00.446) 0:00:18.034 ********* 2026-03-17 01:02:35.930946 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.930950 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.930954 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.930958 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.931198 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.931207 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.931213 | orchestrator | 2026-03-17 01:02:35.931219 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-17 01:02:35.931225 | orchestrator | Tuesday 17 March 2026 00:52:17 +0000 (0:00:01.057) 0:00:19.091 ********* 2026-03-17 01:02:35.931232 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:02:35.931238 | orchestrator | 2026-03-17 01:02:35.931245 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-17 01:02:35.931250 | orchestrator | Tuesday 17 March 2026 00:52:18 +0000 (0:00:00.601) 0:00:19.693 ********* 2026-03-17 01:02:35.931253 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931257 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931261 | orchestrator | skipping: 
[testbed-node-5] 2026-03-17 01:02:35.931265 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931269 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931272 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931276 | orchestrator | 2026-03-17 01:02:35.931280 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-17 01:02:35.931284 | orchestrator | Tuesday 17 March 2026 00:52:20 +0000 (0:00:01.729) 0:00:21.423 ********* 2026-03-17 01:02:35.931287 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931291 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931300 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.931303 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931307 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931311 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931315 | orchestrator | 2026-03-17 01:02:35.931318 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-17 01:02:35.931322 | orchestrator | Tuesday 17 March 2026 00:52:22 +0000 (0:00:02.043) 0:00:23.466 ********* 2026-03-17 01:02:35.931326 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931330 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931333 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.931337 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931341 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931345 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931349 | orchestrator | 2026-03-17 01:02:35.931352 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-17 01:02:35.931356 | orchestrator | Tuesday 17 March 2026 00:52:23 +0000 (0:00:00.985) 0:00:24.451 ********* 2026-03-17 01:02:35.931360 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 01:02:35.931364 | orchestrator | 2026-03-17 01:02:35.931368 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-17 01:02:35.931371 | orchestrator | Tuesday 17 March 2026 00:52:23 +0000 (0:00:00.283) 0:00:24.735 ********* 2026-03-17 01:02:35.931375 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931379 | orchestrator | 2026-03-17 01:02:35.931383 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-17 01:02:35.931386 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:00.473) 0:00:25.208 ********* 2026-03-17 01:02:35.931390 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931394 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931398 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.931416 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931420 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931424 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931428 | orchestrator | 2026-03-17 01:02:35.931432 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-17 01:02:35.931436 | orchestrator | Tuesday 17 March 2026 00:52:24 +0000 (0:00:00.545) 0:00:25.753 ********* 2026-03-17 01:02:35.931440 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931443 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931447 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.931451 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931454 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931458 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931462 | orchestrator | 2026-03-17 01:02:35.931466 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-17 01:02:35.931469 | 
orchestrator | Tuesday 17 March 2026 00:52:25 +0000 (0:00:01.042) 0:00:26.796 ********* 2026-03-17 01:02:35.931473 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931477 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931481 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.931485 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931489 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931492 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931496 | orchestrator | 2026-03-17 01:02:35.931500 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-17 01:02:35.931504 | orchestrator | Tuesday 17 March 2026 00:52:26 +0000 (0:00:00.633) 0:00:27.430 ********* 2026-03-17 01:02:35.931507 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931511 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931515 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.931519 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931523 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931529 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931533 | orchestrator | 2026-03-17 01:02:35.931537 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-17 01:02:35.931540 | orchestrator | Tuesday 17 March 2026 00:52:27 +0000 (0:00:00.849) 0:00:28.279 ********* 2026-03-17 01:02:35.931544 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931548 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931552 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.931555 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931559 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931563 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931567 | orchestrator | 2026-03-17 
01:02:35.931574 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-17 01:02:35.931578 | orchestrator | Tuesday 17 March 2026 00:52:27 +0000 (0:00:00.763) 0:00:29.042 ********* 2026-03-17 01:02:35.931582 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931586 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931622 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.931626 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931643 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931647 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931651 | orchestrator | 2026-03-17 01:02:35.931655 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-17 01:02:35.931802 | orchestrator | Tuesday 17 March 2026 00:52:28 +0000 (0:00:00.815) 0:00:29.858 ********* 2026-03-17 01:02:35.931810 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.931814 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.931818 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.931822 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.931826 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.931830 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.931865 | orchestrator | 2026-03-17 01:02:35.931869 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-17 01:02:35.931873 | orchestrator | Tuesday 17 March 2026 00:52:29 +0000 (0:00:00.662) 0:00:30.521 ********* 2026-03-17 01:02:35.931878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16ca22cf--64f9--579d--994c--d43933026c5f-osd--block--16ca22cf--64f9--579d--994c--d43933026c5f', 
'dm-uuid-LVM-y2HbUUaZfCONiEzQN3cazUkYUoAkrZdHW8PKjpGId1qTLuMh3ALH0t52wbEKMY8J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5-osd--block--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5', 'dm-uuid-LVM-JHeqYSnhBZTczYlYzdSyJxeUPOE5DyFmwNGrA98SMV8wmMFvK1WpqrCejcqRorYA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931913 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part15', 
'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.931963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--16ca22cf--64f9--579d--994c--d43933026c5f-osd--block--16ca22cf--64f9--579d--994c--d43933026c5f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cDgNKN-65o9-GCYm-jd5N-jxY5-Xwfs-AuB9us', 'scsi-0QEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1', 'scsi-SQEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.931968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d77b95b6--dc37--5eed--9a6e--c7871424e120-osd--block--d77b95b6--dc37--5eed--9a6e--c7871424e120', 'dm-uuid-LVM-HqNVUzr8tfZe3LbFOrpJzLVzQO0BoGHOfw6I8RT5B3XStRo9OHByj7YlEavSR3LT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.931972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5-osd--block--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SXu2t4-xlmT-nWR5-Vn1s-LLKz-MhzX-OInbL9', 'scsi-0QEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184', 'scsi-SQEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.931976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d', 'scsi-SQEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.931992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.931999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ec88a4df--1f79--596d--b281--118c477c78df-osd--block--ec88a4df--1f79--596d--b281--118c477c78df', 'dm-uuid-LVM-jWrHNBceoo0lz8m0pcwMKXx2PYvwcJVmqiWNOrWp1aheViUA724rHCoEH3YjDjN0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50c44467--b3f7--539a--99b7--df2211d1583b-osd--block--50c44467--b3f7--539a--99b7--df2211d1583b', 'dm-uuid-LVM-iBPoFze9hkTVnKW4shdae6O6KrVi6HnK8GsOucTdh8eWFD4mzU14n9FDjGCSir6w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9465b490--647b--5adb--8e2e--a5649c4bc673-osd--block--9465b490--647b--5adb--8e2e--a5649c4bc673', 'dm-uuid-LVM-Zam2M2X1xaV047uPshlTJTQeMm2QQ29xiPaMt6CCMJ8QQK5C3Ff1lJKKRu3FerJY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part1', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part14', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part15', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part16', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d77b95b6--dc37--5eed--9a6e--c7871424e120-osd--block--d77b95b6--dc37--5eed--9a6e--c7871424e120'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YEn508-grn6-JU5N-zREC-OznN-9GB5-smBjJ5', 'scsi-0QEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235', 'scsi-SQEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ec88a4df--1f79--596d--b281--118c477c78df-osd--block--ec88a4df--1f79--596d--b281--118c477c78df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gv1WXC-350m-0b7t-fELq-YK9T-Jau5-utKItL', 'scsi-0QEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32', 'scsi-SQEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b', 'scsi-SQEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-17 01:02:35.932132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932140 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.932144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932181 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--50c44467--b3f7--539a--99b7--df2211d1583b-osd--block--50c44467--b3f7--539a--99b7--df2211d1583b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zq4wmp-0FMJ-yEfL-PBHg-uBmH-1kra-xy1Esb', 'scsi-0QEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7', 'scsi-SQEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9465b490--647b--5adb--8e2e--a5649c4bc673-osd--block--9465b490--647b--5adb--8e2e--a5649c4bc673'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NjfmJl-xYO1-1oP1-2iIM-GqNQ-TrFA-8xMy2e', 'scsi-0QEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865', 'scsi-SQEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276', 'scsi-SQEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-17 01:02:35.932211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932405 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.932412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part1', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part14', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part15', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part16', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-17 01:02:35.932593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932623 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.932627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932653 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.932684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part1', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part14', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part15', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part16', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932691 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932695 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.932703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-17 01:02:35.932767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:02:35.932847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part1', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part14', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part15', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part16', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:02:35.932923 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.932928 | orchestrator | 2026-03-17 01:02:35.932932 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-17 01:02:35.932936 | orchestrator | Tuesday 17 March 2026 00:52:30 +0000 (0:00:01.198) 0:00:31.720 ********* 2026-03-17 01:02:35.932942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16ca22cf--64f9--579d--994c--d43933026c5f-osd--block--16ca22cf--64f9--579d--994c--d43933026c5f', 'dm-uuid-LVM-y2HbUUaZfCONiEzQN3cazUkYUoAkrZdHW8PKjpGId1qTLuMh3ALH0t52wbEKMY8J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.932949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5-osd--block--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5', 'dm-uuid-LVM-JHeqYSnhBZTczYlYzdSyJxeUPOE5DyFmwNGrA98SMV8wmMFvK1WpqrCejcqRorYA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.932957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.932962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.932966 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.932981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.932985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.932990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933048 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d77b95b6--dc37--5eed--9a6e--c7871424e120-osd--block--d77b95b6--dc37--5eed--9a6e--c7871424e120', 
'dm-uuid-LVM-HqNVUzr8tfZe3LbFOrpJzLVzQO0BoGHOfw6I8RT5B3XStRo9OHByj7YlEavSR3LT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933319 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ec88a4df--1f79--596d--b281--118c477c78df-osd--block--ec88a4df--1f79--596d--b281--118c477c78df', 'dm-uuid-LVM-jWrHNBceoo0lz8m0pcwMKXx2PYvwcJVmqiWNOrWp1aheViUA724rHCoEH3YjDjN0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--16ca22cf--64f9--579d--994c--d43933026c5f-osd--block--16ca22cf--64f9--579d--994c--d43933026c5f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cDgNKN-65o9-GCYm-jd5N-jxY5-Xwfs-AuB9us', 'scsi-0QEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1', 'scsi-SQEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933379 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933402 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5-osd--block--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SXu2t4-xlmT-nWR5-Vn1s-LLKz-MhzX-OInbL9', 'scsi-0QEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184', 'scsi-SQEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933412 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933416 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50c44467--b3f7--539a--99b7--df2211d1583b-osd--block--50c44467--b3f7--539a--99b7--df2211d1583b', 'dm-uuid-LVM-iBPoFze9hkTVnKW4shdae6O6KrVi6HnK8GsOucTdh8eWFD4mzU14n9FDjGCSir6w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933421 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9465b490--647b--5adb--8e2e--a5649c4bc673-osd--block--9465b490--647b--5adb--8e2e--a5649c4bc673', 'dm-uuid-LVM-Zam2M2X1xaV047uPshlTJTQeMm2QQ29xiPaMt6CCMJ8QQK5C3Ff1lJKKRu3FerJY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933452 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933458 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933462 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d', 'scsi-SQEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933475 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933479 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933581 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933588 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933607 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933614 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part1', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part14', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part15', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 
217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part16', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933665 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d77b95b6--dc37--5eed--9a6e--c7871424e120-osd--block--d77b95b6--dc37--5eed--9a6e--c7871424e120'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YEn508-grn6-JU5N-zREC-OznN-9GB5-smBjJ5', 'scsi-0QEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235', 'scsi-SQEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933700 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ec88a4df--1f79--596d--b281--118c477c78df-osd--block--ec88a4df--1f79--596d--b281--118c477c78df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gv1WXC-350m-0b7t-fELq-YK9T-Jau5-utKItL', 'scsi-0QEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32', 'scsi-SQEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part15', 
'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b', 'scsi-SQEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--50c44467--b3f7--539a--99b7--df2211d1583b-osd--block--50c44467--b3f7--539a--99b7--df2211d1583b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zq4wmp-0FMJ-yEfL-PBHg-uBmH-1kra-xy1Esb', 'scsi-0QEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7', 'scsi-SQEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933768 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-27-00']}, 
'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933782 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9465b490--647b--5adb--8e2e--a5649c4bc673-osd--block--9465b490--647b--5adb--8e2e--a5649c4bc673'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NjfmJl-xYO1-1oP1-2iIM-GqNQ-TrFA-8xMy2e', 'scsi-0QEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865', 'scsi-SQEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933786 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.933790 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276', 'scsi-SQEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933838 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:02:35.933847 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.933861 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933872 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933881 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933888 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933895 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933907 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933942 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933951 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933958 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part1', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part14', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids':
['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part15', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part16', 'scsi-SQEMU_QEMU_HARDDISK_7eabd72e-ea70-47b9-ae5c-bbb511389266-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933980 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933985 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.933994 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934000 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934005 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934009 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934052 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934086 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [],
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934096 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934103 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part1', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part14', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part15', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part16', 'scsi-SQEMU_QEMU_HARDDISK_22c407cf-e116-4808-97ee-42321e6f678c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934107 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934137 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.934145 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.934149 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.934153 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934157 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [],
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934164 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934168 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934172 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934176 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934204 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934215 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934227 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part1', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part14', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part15', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part16', 'scsi-SQEMU_QEMU_HARDDISK_f9f9bdf5-53bb-40c1-a0f3-235d84124d2c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934232 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:02:35.934239 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.934243 | orchestrator |
2026-03-17 01:02:35.934273 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-17 01:02:35.934278 | orchestrator | Tuesday 17 March 2026 00:52:32 +0000 (0:00:01.643) 0:00:33.363 *********
2026-03-17 01:02:35.934282 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.934286 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.934290 | orchestrator | ok: [testbed-node-5]
2026-03-17
01:02:35.934294 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.934298 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.934302 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.934308 | orchestrator |
2026-03-17 01:02:35.934315 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-17 01:02:35.934323 | orchestrator | Tuesday 17 March 2026 00:52:33 +0000 (0:00:01.642) 0:00:35.006 *********
2026-03-17 01:02:35.934331 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.934346 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.934352 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.934358 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.934364 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.934370 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.934375 | orchestrator |
2026-03-17 01:02:35.934381 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-17 01:02:35.934388 | orchestrator | Tuesday 17 March 2026 00:52:34 +0000 (0:00:00.811) 0:00:35.817 *********
2026-03-17 01:02:35.934394 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.934400 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.934406 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.934412 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.934418 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.934423 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.934429 | orchestrator |
2026-03-17 01:02:35.934435 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-17 01:02:35.934440 | orchestrator | Tuesday 17 March 2026 00:52:35 +0000 (0:00:01.192) 0:00:37.010 *********
2026-03-17 01:02:35.934446 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.934452 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.934457 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.934462 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.934475 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.934482 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.934488 | orchestrator |
2026-03-17 01:02:35.934495 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-17 01:02:35.934501 | orchestrator | Tuesday 17 March 2026 00:52:36 +0000 (0:00:00.744) 0:00:37.754 *********
2026-03-17 01:02:35.934507 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.934513 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.934524 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.934529 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.934533 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.934537 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.934541 | orchestrator |
2026-03-17 01:02:35.934545 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-17 01:02:35.934548 | orchestrator | Tuesday 17 March 2026 00:52:37 +0000 (0:00:00.695) 0:00:38.450 *********
2026-03-17 01:02:35.934553 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.934556 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.934560 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.934564 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.934568 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.934576 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.934579 | orchestrator |
2026-03-17 01:02:35.934583 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-17 01:02:35.934587 | orchestrator | Tuesday 17 March 2026 00:52:37 +0000 (0:00:00.729) 0:00:39.180 *********
2026-03-17 01:02:35.934591 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 01:02:35.934595 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-17 01:02:35.934599 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 01:02:35.934603 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 01:02:35.934606 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-17 01:02:35.934610 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-17 01:02:35.934614 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 01:02:35.934618 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-17 01:02:35.934621 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-17 01:02:35.934625 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-17 01:02:35.934629 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-17 01:02:35.934633 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-17 01:02:35.934636 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-17 01:02:35.934641 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-17 01:02:35.934644 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-17 01:02:35.934648 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-17 01:02:35.934652 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-17 01:02:35.934656 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-17 01:02:35.934660 | orchestrator |
2026-03-17 01:02:35.934663 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-17 01:02:35.934667 | orchestrator | Tuesday 17 March 2026 00:52:41 +0000 (0:00:03.790) 0:00:42.970 *********
2026-03-17 01:02:35.934671 | orchestrator | skipping: [testbed-node-3] =>
(item=testbed-node-0)
2026-03-17 01:02:35.934675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 01:02:35.934679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 01:02:35.934682 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.934686 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-17 01:02:35.934690 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-17 01:02:35.934694 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-17 01:02:35.934698 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.934701 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-17 01:02:35.934747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-17 01:02:35.934755 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-17 01:02:35.934761 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.934767 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 01:02:35.934774 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-17 01:02:35.934781 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-17 01:02:35.934787 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.934793 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-17 01:02:35.934799 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-17 01:02:35.934803 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-17 01:02:35.934807 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.934811 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-17 01:02:35.934815 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-17 01:02:35.934819 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-17 01:02:35.934826 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.934830 | orchestrator |
2026-03-17 01:02:35.934834 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-17 01:02:35.934837 | orchestrator | Tuesday 17 March 2026 00:52:42 +0000 (0:00:01.162) 0:00:44.133 *********
2026-03-17 01:02:35.934841 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.934845 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.934849 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.934853 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.934857 | orchestrator |
2026-03-17 01:02:35.934861 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-17 01:02:35.934866 | orchestrator | Tuesday 17 March 2026 00:52:44 +0000 (0:00:01.414) 0:00:45.547 *********
2026-03-17 01:02:35.934870 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.934873 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.934877 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.934881 | orchestrator |
2026-03-17 01:02:35.934885 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-17 01:02:35.934891 | orchestrator | Tuesday 17 March 2026 00:52:44 +0000 (0:00:00.359) 0:00:45.906 *********
2026-03-17 01:02:35.934895 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.934899 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.934903 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.934907 | orchestrator |
2026-03-17 01:02:35.934911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-17 01:02:35.934914 | orchestrator | Tuesday 17 March 2026 00:52:45 +0000 (0:00:00.351) 0:00:46.258 *********
2026-03-17 01:02:35.934919 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.934924 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.934928 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.934932 | orchestrator |
2026-03-17 01:02:35.934937 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-17 01:02:35.934942 | orchestrator | Tuesday 17 March 2026 00:52:45 +0000 (0:00:00.335) 0:00:46.594 *********
2026-03-17 01:02:35.934946 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.934951 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.934955 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.934960 | orchestrator |
2026-03-17 01:02:35.934964 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-17 01:02:35.934968 | orchestrator | Tuesday 17 March 2026 00:52:46 +0000 (0:00:00.953) 0:00:47.547 *********
2026-03-17 01:02:35.934973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 01:02:35.934977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 01:02:35.934982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 01:02:35.934986 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.934990 | orchestrator |
2026-03-17 01:02:35.934994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-17 01:02:35.934999 | orchestrator | Tuesday 17 March 2026 00:52:47 +0000 (0:00:00.740) 0:00:48.288 *********
2026-03-17 01:02:35.935004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 01:02:35.935008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 01:02:35.935012 | orchestrator | skipping:
[testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.935017 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935021 | orchestrator | 2026-03-17 01:02:35.935025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-17 01:02:35.935030 | orchestrator | Tuesday 17 March 2026 00:52:47 +0000 (0:00:00.442) 0:00:48.730 ********* 2026-03-17 01:02:35.935035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.935042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.935047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.935051 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935056 | orchestrator | 2026-03-17 01:02:35.935060 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-17 01:02:35.935067 | orchestrator | Tuesday 17 March 2026 00:52:47 +0000 (0:00:00.429) 0:00:49.160 ********* 2026-03-17 01:02:35.935073 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935078 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935084 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935090 | orchestrator | 2026-03-17 01:02:35.935097 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-17 01:02:35.935103 | orchestrator | Tuesday 17 March 2026 00:52:48 +0000 (0:00:00.494) 0:00:49.655 ********* 2026-03-17 01:02:35.935109 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-17 01:02:35.935116 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-17 01:02:35.935142 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-17 01:02:35.935147 | orchestrator | 2026-03-17 01:02:35.935151 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-17 01:02:35.935155 | orchestrator | Tuesday 17 March 2026 
00:52:49 +0000 (0:00:01.476) 0:00:51.131 ********* 2026-03-17 01:02:35.935159 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:02:35.935163 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:02:35.935167 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:02:35.935171 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-17 01:02:35.935174 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-17 01:02:35.935178 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-17 01:02:35.935182 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-17 01:02:35.935186 | orchestrator | 2026-03-17 01:02:35.935190 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-17 01:02:35.935193 | orchestrator | Tuesday 17 March 2026 00:52:51 +0000 (0:00:01.124) 0:00:52.256 ********* 2026-03-17 01:02:35.935197 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:02:35.935201 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:02:35.935205 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:02:35.935209 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-17 01:02:35.935212 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-17 01:02:35.935216 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-17 01:02:35.935220 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-03-17 01:02:35.935224 | orchestrator | 2026-03-17 01:02:35.935228 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 01:02:35.935235 | orchestrator | Tuesday 17 March 2026 00:52:52 +0000 (0:00:01.802) 0:00:54.059 ********* 2026-03-17 01:02:35.935239 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.935244 | orchestrator | 2026-03-17 01:02:35.935248 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 01:02:35.935251 | orchestrator | Tuesday 17 March 2026 00:52:53 +0000 (0:00:01.034) 0:00:55.093 ********* 2026-03-17 01:02:35.935255 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.935263 | orchestrator | 2026-03-17 01:02:35.935267 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 01:02:35.935270 | orchestrator | Tuesday 17 March 2026 00:52:54 +0000 (0:00:01.056) 0:00:56.150 ********* 2026-03-17 01:02:35.935274 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935278 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.935282 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.935286 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.935290 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.935293 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.935297 | orchestrator | 2026-03-17 01:02:35.935301 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 01:02:35.935305 | orchestrator | Tuesday 17 March 2026 00:52:55 +0000 (0:00:00.966) 0:00:57.117 ********* 2026-03-17 
01:02:35.935308 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935312 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935316 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935320 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935324 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935327 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935331 | orchestrator | 2026-03-17 01:02:35.935335 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 01:02:35.935339 | orchestrator | Tuesday 17 March 2026 00:52:56 +0000 (0:00:00.804) 0:00:57.922 ********* 2026-03-17 01:02:35.935343 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935346 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935350 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935354 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935358 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935361 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935365 | orchestrator | 2026-03-17 01:02:35.935369 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 01:02:35.935373 | orchestrator | Tuesday 17 March 2026 00:52:57 +0000 (0:00:00.704) 0:00:58.626 ********* 2026-03-17 01:02:35.935377 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935380 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935384 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935388 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935392 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935395 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935399 | orchestrator | 2026-03-17 01:02:35.935403 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 01:02:35.935407 | orchestrator | 
Tuesday 17 March 2026 00:52:58 +0000 (0:00:00.881) 0:00:59.508 ********* 2026-03-17 01:02:35.935411 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935414 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.935418 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.935422 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.935426 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.935442 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.935446 | orchestrator | 2026-03-17 01:02:35.935450 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 01:02:35.935454 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.899) 0:01:00.407 ********* 2026-03-17 01:02:35.935458 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935461 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.935465 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.935469 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935473 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935477 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935480 | orchestrator | 2026-03-17 01:02:35.935484 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 01:02:35.935488 | orchestrator | Tuesday 17 March 2026 00:52:59 +0000 (0:00:00.677) 0:01:01.085 ********* 2026-03-17 01:02:35.935494 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935498 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.935502 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.935506 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935510 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935514 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935517 | orchestrator | 2026-03-17 01:02:35.935521 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 01:02:35.935525 | orchestrator | Tuesday 17 March 2026 00:53:01 +0000 (0:00:01.222) 0:01:02.307 ********* 2026-03-17 01:02:35.935529 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935533 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935537 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935540 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.935544 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.935548 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.935552 | orchestrator | 2026-03-17 01:02:35.935556 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 01:02:35.935559 | orchestrator | Tuesday 17 March 2026 00:53:02 +0000 (0:00:01.619) 0:01:03.927 ********* 2026-03-17 01:02:35.935564 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935567 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935571 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935575 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.935579 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.935582 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.935586 | orchestrator | 2026-03-17 01:02:35.935590 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 01:02:35.935594 | orchestrator | Tuesday 17 March 2026 00:53:04 +0000 (0:00:01.722) 0:01:05.649 ********* 2026-03-17 01:02:35.935600 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935604 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.935608 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.935611 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935615 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935619 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935623 | 
orchestrator | 2026-03-17 01:02:35.935627 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 01:02:35.935631 | orchestrator | Tuesday 17 March 2026 00:53:05 +0000 (0:00:00.877) 0:01:06.527 ********* 2026-03-17 01:02:35.935634 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935638 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.935642 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.935646 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.935650 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.935653 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.935657 | orchestrator | 2026-03-17 01:02:35.935661 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 01:02:35.935665 | orchestrator | Tuesday 17 March 2026 00:53:06 +0000 (0:00:01.334) 0:01:07.862 ********* 2026-03-17 01:02:35.935669 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935673 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935676 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935680 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935684 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935688 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935692 | orchestrator | 2026-03-17 01:02:35.935696 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 01:02:35.935699 | orchestrator | Tuesday 17 March 2026 00:53:07 +0000 (0:00:01.255) 0:01:09.117 ********* 2026-03-17 01:02:35.935703 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935707 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935711 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935724 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935732 | orchestrator | skipping: [testbed-node-1] 2026-03-17 
01:02:35.935736 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935740 | orchestrator | 2026-03-17 01:02:35.935744 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 01:02:35.935748 | orchestrator | Tuesday 17 March 2026 00:53:08 +0000 (0:00:00.938) 0:01:10.056 ********* 2026-03-17 01:02:35.935752 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935755 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935759 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935763 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935767 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935771 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935774 | orchestrator | 2026-03-17 01:02:35.935778 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 01:02:35.935782 | orchestrator | Tuesday 17 March 2026 00:53:10 +0000 (0:00:01.279) 0:01:11.336 ********* 2026-03-17 01:02:35.935786 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935790 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.935793 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.935797 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935801 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935805 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935809 | orchestrator | 2026-03-17 01:02:35.935812 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 01:02:35.935816 | orchestrator | Tuesday 17 March 2026 00:53:10 +0000 (0:00:00.770) 0:01:12.106 ********* 2026-03-17 01:02:35.935820 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935824 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.935828 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.935832 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.935849 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.935854 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.935858 | orchestrator | 2026-03-17 01:02:35.935861 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 01:02:35.935865 | orchestrator | Tuesday 17 March 2026 00:53:12 +0000 (0:00:01.111) 0:01:13.218 ********* 2026-03-17 01:02:35.935869 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.935873 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.935877 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.935881 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.935884 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.935888 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.935892 | orchestrator | 2026-03-17 01:02:35.935896 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-17 01:02:35.935900 | orchestrator | Tuesday 17 March 2026 00:53:12 +0000 (0:00:00.836) 0:01:14.054 ********* 2026-03-17 01:02:35.935904 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935908 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935911 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935915 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.935919 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.935923 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.935926 | orchestrator | 2026-03-17 01:02:35.935930 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 01:02:35.935934 | orchestrator | Tuesday 17 March 2026 00:53:14 +0000 (0:00:01.499) 0:01:15.553 ********* 2026-03-17 01:02:35.935938 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.935942 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.935946 | 
orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.935950 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.935954 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.935958 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.935961 | orchestrator | 2026-03-17 01:02:35.935965 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-17 01:02:35.935972 | orchestrator | Tuesday 17 March 2026 00:53:15 +0000 (0:00:01.351) 0:01:16.904 ********* 2026-03-17 01:02:35.935976 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.935980 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.935984 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.935988 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.935991 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.935995 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.935999 | orchestrator | 2026-03-17 01:02:35.936003 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-17 01:02:35.936010 | orchestrator | Tuesday 17 March 2026 00:53:18 +0000 (0:00:02.325) 0:01:19.230 ********* 2026-03-17 01:02:35.936014 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.936018 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.936021 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.936025 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.936029 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.936033 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.936037 | orchestrator | 2026-03-17 01:02:35.936041 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-17 01:02:35.936044 | orchestrator | Tuesday 17 March 2026 00:53:21 +0000 (0:00:03.255) 0:01:22.485 ********* 2026-03-17 01:02:35.936048 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.936052 | orchestrator | 2026-03-17 01:02:35.936056 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-17 01:02:35.936060 | orchestrator | Tuesday 17 March 2026 00:53:22 +0000 (0:00:01.576) 0:01:24.062 ********* 2026-03-17 01:02:35.936064 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.936068 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.936072 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.936075 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.936079 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.936083 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.936087 | orchestrator | 2026-03-17 01:02:35.936091 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-17 01:02:35.936095 | orchestrator | Tuesday 17 March 2026 00:53:23 +0000 (0:00:00.932) 0:01:24.994 ********* 2026-03-17 01:02:35.936098 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.936102 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.936106 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.936110 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.936114 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.936117 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.936122 | orchestrator | 2026-03-17 01:02:35.936125 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-17 01:02:35.936129 | orchestrator | Tuesday 17 March 2026 00:53:24 +0000 (0:00:01.188) 0:01:26.182 ********* 2026-03-17 01:02:35.936133 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 
01:02:35.936137 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 01:02:35.936141 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 01:02:35.936145 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 01:02:35.936148 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 01:02:35.936152 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 01:02:35.936156 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 01:02:35.936160 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-17 01:02:35.936169 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 01:02:35.936173 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 01:02:35.936190 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 01:02:35.936194 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-17 01:02:35.936198 | orchestrator | 2026-03-17 01:02:35.936202 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-17 01:02:35.936206 | orchestrator | Tuesday 17 March 2026 00:53:26 +0000 (0:00:01.601) 0:01:27.784 ********* 2026-03-17 01:02:35.936210 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.936214 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.936218 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.936221 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.936225 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.936229 | 
orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.936233 | orchestrator | 2026-03-17 01:02:35.936237 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-17 01:02:35.936240 | orchestrator | Tuesday 17 March 2026 00:53:28 +0000 (0:00:01.576) 0:01:29.361 ********* 2026-03-17 01:02:35.936244 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.936248 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.936252 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.936255 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.936259 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.936263 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.936267 | orchestrator | 2026-03-17 01:02:35.936271 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-17 01:02:35.936274 | orchestrator | Tuesday 17 March 2026 00:53:28 +0000 (0:00:00.680) 0:01:30.041 ********* 2026-03-17 01:02:35.936278 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.936284 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.936290 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.936297 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.936312 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.936319 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.936325 | orchestrator | 2026-03-17 01:02:35.936331 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-17 01:02:35.936337 | orchestrator | Tuesday 17 March 2026 00:53:29 +0000 (0:00:00.741) 0:01:30.782 ********* 2026-03-17 01:02:35.936343 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.936349 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.936354 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.936361 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.936370 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.936375 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.936381 | orchestrator | 2026-03-17 01:02:35.936388 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-17 01:02:35.936393 | orchestrator | Tuesday 17 March 2026 00:53:30 +0000 (0:00:00.773) 0:01:31.555 ********* 2026-03-17 01:02:35.936399 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.936405 | orchestrator | 2026-03-17 01:02:35.936411 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-17 01:02:35.936417 | orchestrator | Tuesday 17 March 2026 00:53:31 +0000 (0:00:01.502) 0:01:33.058 ********* 2026-03-17 01:02:35.936422 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.936428 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.936434 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.936440 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.936446 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.936458 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.936465 | orchestrator | 2026-03-17 01:02:35.936471 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-17 01:02:35.936477 | orchestrator | Tuesday 17 March 2026 00:54:21 +0000 (0:00:49.291) 0:02:22.349 ********* 2026-03-17 01:02:35.936484 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 01:02:35.936490 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 01:02:35.936496 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-17 01:02:35.936502 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.936508 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 01:02:35.936514 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 01:02:35.936520 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 01:02:35.936526 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.936532 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 01:02:35.936538 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 01:02:35.936544 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 01:02:35.936550 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.936556 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 01:02:35.936562 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 01:02:35.936569 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 01:02:35.936575 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.936582 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 01:02:35.936588 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-17 01:02:35.936594 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-17 01:02:35.936600 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.936637 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-17 01:02:35.936646 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)
2026-03-17 01:02:35.936653 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-17 01:02:35.936659 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.936666 | orchestrator |
2026-03-17 01:02:35.936672 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-17 01:02:35.936678 | orchestrator | Tuesday 17 March 2026 00:54:21 +0000 (0:00:00.543) 0:02:22.893 *********
2026-03-17 01:02:35.936683 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.936690 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.936697 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.936706 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.936712 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.936751 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.936758 | orchestrator |
2026-03-17 01:02:35.936765 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-17 01:02:35.936771 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:00.647) 0:02:23.541 *********
2026-03-17 01:02:35.936778 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.936784 | orchestrator |
2026-03-17 01:02:35.936791 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-17 01:02:35.936797 | orchestrator | Tuesday 17 March 2026 00:54:22 +0000 (0:00:00.132) 0:02:23.673 *********
2026-03-17 01:02:35.936804 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.936811 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.936827 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.936834 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.936841 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.936847 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.936854 | orchestrator |
2026-03-17 01:02:35.936858 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-17 01:02:35.936862 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.688) 0:02:24.361 *********
2026-03-17 01:02:35.936866 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.936870 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.936874 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.936878 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.936881 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.936885 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.936889 | orchestrator |
2026-03-17 01:02:35.936893 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-17 01:02:35.936901 | orchestrator | Tuesday 17 March 2026 00:54:23 +0000 (0:00:00.613) 0:02:24.975 *********
2026-03-17 01:02:35.936905 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.936908 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.936912 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.936916 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.936920 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.936924 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.936927 | orchestrator |
2026-03-17 01:02:35.936931 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-17 01:02:35.936935 | orchestrator | Tuesday 17 March 2026 00:54:24 +0000 (0:00:00.709) 0:02:25.684 *********
2026-03-17 01:02:35.936939 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.936943 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.936947 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.936951 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.936954 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.936960 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.936967 | orchestrator |
2026-03-17 01:02:35.936973 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-17 01:02:35.936979 | orchestrator | Tuesday 17 March 2026 00:54:27 +0000 (0:00:02.812) 0:02:28.497 *********
2026-03-17 01:02:35.936985 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.936991 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.936998 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.937004 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.937010 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.937017 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.937023 | orchestrator |
2026-03-17 01:02:35.937030 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-17 01:02:35.937036 | orchestrator | Tuesday 17 March 2026 00:54:28 +0000 (0:00:00.798) 0:02:29.296 *********
2026-03-17 01:02:35.937043 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:02:35.937050 | orchestrator |
2026-03-17 01:02:35.937057 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-17 01:02:35.937063 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:00.909) 0:02:30.205 *********
2026-03-17 01:02:35.937071 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.937075 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.937079 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.937085 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.937091 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.937101 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.937108 | orchestrator |
2026-03-17 01:02:35.937114 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-17 01:02:35.937120 | orchestrator | Tuesday 17 March 2026 00:54:29 +0000 (0:00:00.498) 0:02:30.703 *********
2026-03-17 01:02:35.937131 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.937137 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.937143 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.937149 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.937155 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.937162 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.937168 | orchestrator |
2026-03-17 01:02:35.937174 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-17 01:02:35.937181 | orchestrator | Tuesday 17 March 2026 00:54:30 +0000 (0:00:00.707) 0:02:31.411 *********
2026-03-17 01:02:35.937184 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.937188 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.937220 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.937224 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.937228 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.937232 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.937235 | orchestrator |
2026-03-17 01:02:35.937239 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-17 01:02:35.937243 | orchestrator | Tuesday 17 March 2026 00:54:31 +0000 (0:00:00.861) 0:02:32.272 *********
2026-03-17 01:02:35.937247 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.937251 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.937255 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.937258 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.937262 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.937266 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.937270 | orchestrator |
2026-03-17 01:02:35.937274 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-17 01:02:35.937277 | orchestrator | Tuesday 17 March 2026 00:54:31 +0000 (0:00:00.801) 0:02:33.074 *********
2026-03-17 01:02:35.937281 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.937285 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.937289 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.937292 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.937296 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.937300 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.937304 | orchestrator |
2026-03-17 01:02:35.937308 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-17 01:02:35.937311 | orchestrator | Tuesday 17 March 2026 00:54:32 +0000 (0:00:00.567) 0:02:33.642 *********
2026-03-17 01:02:35.937315 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.937319 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.937323 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.937326 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.937330 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.937334 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.937337 | orchestrator |
2026-03-17 01:02:35.937341 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-17 01:02:35.937345 | orchestrator | Tuesday 17 March 2026 00:54:33 +0000 (0:00:00.882) 0:02:34.524 *********
2026-03-17 01:02:35.937349 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.937352 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.937356 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.937360 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.937364 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.937367 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.937371 | orchestrator |
2026-03-17 01:02:35.937378 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-17 01:02:35.937382 | orchestrator | Tuesday 17 March 2026 00:54:34 +0000 (0:00:00.823) 0:02:35.348 *********
2026-03-17 01:02:35.937386 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.937390 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.937397 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.937401 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.937404 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.937408 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.937412 | orchestrator |
2026-03-17 01:02:35.937416 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-17 01:02:35.937419 | orchestrator | Tuesday 17 March 2026 00:54:35 +0000 (0:00:00.953) 0:02:36.301 *********
2026-03-17 01:02:35.937423 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.937427 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.937431 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.937434 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.937438 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.937442 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.937446 | orchestrator |
2026-03-17 01:02:35.937449 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-17 01:02:35.937453 | orchestrator | Tuesday 17 March 2026 00:54:36 +0000 (0:00:01.037) 0:02:37.339 *********
2026-03-17 01:02:35.937457 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:02:35.937462 | orchestrator |
2026-03-17 01:02:35.937465 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-17 01:02:35.937477 | orchestrator | Tuesday 17 March 2026 00:54:37 +0000 (0:00:01.025) 0:02:38.365 *********
2026-03-17 01:02:35.937481 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-17 01:02:35.937490 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-17 01:02:35.937494 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-17 01:02:35.937498 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-17 01:02:35.937502 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-17 01:02:35.937506 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-17 01:02:35.937509 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-17 01:02:35.937513 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-17 01:02:35.937517 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-17 01:02:35.937521 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-17 01:02:35.937524 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-17 01:02:35.937528 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-17 01:02:35.937532 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-17 01:02:35.937536 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-17 01:02:35.937539 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-17 01:02:35.937543 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
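Editor's note: the release.yml tasks above try a cascade of set_fact guards (jewel through reef) and only the one matching the detected major version fires, here "reef". A minimal sketch of that mapping, assuming the standard `ceph --version` string format such as "ceph version 18.2.2 (<hash>) reef (stable)"; the helper name and table layout are illustrative, not taken from ceph-ansible:

```python
# Map a Ceph major version to its release codename, mirroring the
# set_fact cascade in ceph-container-common/tasks/release.yml.
# (Illustrative sketch; ceph-ansible implements this as Ansible tasks.)
RELEASES = {
    10: "jewel", 11: "kraken", 12: "luminous", 13: "mimic",
    14: "nautilus", 15: "octopus", 16: "pacific", 17: "quincy",
    18: "reef",
}

def ceph_release(version_stdout: str) -> str:
    # 'ceph --version' prints e.g. "ceph version 18.2.2 (<hash>) reef (stable)",
    # so the third whitespace-separated token is the numeric version.
    numeric = version_stdout.split()[2]
    major = int(numeric.split(".")[0])
    return RELEASES.get(major, "unknown")

print(ceph_release("ceph version 18.2.2 (abc123) reef (stable)"))  # reef
```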
2026-03-17 01:02:35.937547 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-17 01:02:35.937551 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-17 01:02:35.937567 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-17 01:02:35.937572 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-17 01:02:35.937576 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-17 01:02:35.937579 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-17 01:02:35.937583 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-17 01:02:35.937587 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-17 01:02:35.937591 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-17 01:02:35.937595 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-17 01:02:35.937598 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-17 01:02:35.937602 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-17 01:02:35.937609 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-17 01:02:35.937613 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-17 01:02:35.937617 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-17 01:02:35.937620 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-17 01:02:35.937624 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-17 01:02:35.937628 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-17 01:02:35.937632 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-17 01:02:35.937636 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-17 01:02:35.937639 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-17 01:02:35.937643 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-17 01:02:35.937647 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-17 01:02:35.937651 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-17 01:02:35.937655 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-17 01:02:35.937658 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-17 01:02:35.937662 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-17 01:02:35.937666 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-17 01:02:35.937672 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-17 01:02:35.937676 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-17 01:02:35.937679 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-17 01:02:35.937683 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 01:02:35.937687 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 01:02:35.937691 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-17 01:02:35.937694 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 01:02:35.937698 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 01:02:35.937702 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 01:02:35.937706 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 01:02:35.937709 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 01:02:35.937713 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 01:02:35.937743 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-17 01:02:35.937747 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 01:02:35.937751 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 01:02:35.937755 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 01:02:35.937759 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 01:02:35.937763 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 01:02:35.937767 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-17 01:02:35.937770 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 01:02:35.937774 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 01:02:35.937778 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 01:02:35.937782 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 01:02:35.937785 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 01:02:35.937789 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-17 01:02:35.937796 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 01:02:35.937800 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 01:02:35.937804 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 01:02:35.937808 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 01:02:35.937811 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 01:02:35.937815 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-17 01:02:35.937819 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 01:02:35.937837 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 01:02:35.937842 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 01:02:35.937845 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 01:02:35.937849 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 01:02:35.937853 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-17 01:02:35.937857 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 01:02:35.937861 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-17 01:02:35.937865 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 01:02:35.937868 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-17 01:02:35.937872 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-17 01:02:35.937876 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-17 01:02:35.937880 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-17 01:02:35.937883 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-17 01:02:35.937887 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-17 01:02:35.937891 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-17 01:02:35.937895 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-17 01:02:35.937899 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-17 01:02:35.937902 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-17 01:02:35.937906 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-17 01:02:35.937910 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-17 01:02:35.937914 | orchestrator |
2026-03-17 01:02:35.937918 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-17 01:02:35.937924 | orchestrator | Tuesday 17 March 2026 00:54:44 +0000 (0:00:07.018) 0:02:45.384 *********
2026-03-17 01:02:35.937930 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.937936 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.937941 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.937951 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.937957 | orchestrator |
2026-03-17 01:02:35.937963 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-17 01:02:35.937969 | orchestrator | Tuesday 17 March 2026 00:54:45 +0000 (0:00:01.145) 0:02:46.530 *********
2026-03-17 01:02:35.937974 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-17 01:02:35.937980 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-17 01:02:35.937986 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-17 01:02:35.937996 | orchestrator |
2026-03-17 01:02:35.938002 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-17 01:02:35.938008 | orchestrator | Tuesday 17 March 2026 00:54:46 +0000 (0:00:00.860) 0:02:47.390 *********
2026-03-17 01:02:35.938040 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-17 01:02:35.938048 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-17 01:02:35.938054 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-17 01:02:35.938061 | orchestrator |
2026-03-17 01:02:35.938068 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-17 01:02:35.938073 | orchestrator | Tuesday 17 March 2026 00:54:47 +0000 (0:00:01.782) 0:02:49.173 *********
2026-03-17 01:02:35.938077 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.938080 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.938084 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.938088 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938092 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938095 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938099 | orchestrator |
2026-03-17 01:02:35.938103 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-17 01:02:35.938107 | orchestrator | Tuesday 17 March 2026 00:54:48 +0000 (0:00:00.726) 0:02:49.899 *********
2026-03-17 01:02:35.938111 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.938114 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.938118 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.938122 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938126 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938129 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938133 | orchestrator |
2026-03-17 01:02:35.938137 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-17 01:02:35.938141 | orchestrator | Tuesday 17 March 2026 00:54:49 +0000 (0:00:00.803) 0:02:50.537 *********
2026-03-17 01:02:35.938145 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.938148 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.938152 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.938156 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938160 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938163 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938167 | orchestrator |
2026-03-17 01:02:35.938188 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-17 01:02:35.938193 | orchestrator | Tuesday 17 March 2026 00:54:50 +0000 (0:00:00.803) 0:02:51.341 *********
2026-03-17 01:02:35.938197 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.938201 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.938204 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.938208 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938212 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938216 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938220 | orchestrator |
2026-03-17 01:02:35.938224 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-17 01:02:35.938227 | orchestrator | Tuesday 17 March 2026 00:54:50 +0000 (0:00:00.723) 0:02:52.065 *********
2026-03-17 01:02:35.938231 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.938235 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.938239 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.938243 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938246 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938250 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938254 | orchestrator |
2026-03-17 01:02:35.938258 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-17 01:02:35.938266 | orchestrator | Tuesday 17 March 2026 00:54:51 +0000 (0:00:00.894) 0:02:52.959 *********
2026-03-17 01:02:35.938270 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.938274 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.938278 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.938282 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938285 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938289 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938293 | orchestrator |
2026-03-17 01:02:35.938297 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-17 01:02:35.938301 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:00.486) 0:02:53.445 *********
2026-03-17 01:02:35.938304 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.938308 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.938312 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.938316 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938321 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938330 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938339 | orchestrator |
2026-03-17 01:02:35.938345 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-17 01:02:35.938355 | orchestrator | Tuesday 17 March 2026 00:54:52 +0000 (0:00:00.603) 0:02:54.049 *********
2026-03-17 01:02:35.938362 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.938368 | orchestrator | skipping: [testbed-node-4]
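Editor's note: the 'ceph-volume lvm batch --report' tasks around this point keep separate code paths for the legacy and new JSON report shapes. A minimal sketch of counting OSDs from either shape, assuming the legacy report is a dict with an "osds" list and the new report is a bare list of OSD specs (both shapes are assumptions from the task names, not taken verbatim from ceph-ansible):

```python
import json

def num_osds_from_batch_report(report_json: str) -> int:
    # 'ceph-volume lvm batch --report --format json' changed shape across
    # releases: assumed here as {"osds": [...], "vgs": [...]} (legacy)
    # versus a bare list of OSD specs (new). Sketch only.
    report = json.loads(report_json)
    if isinstance(report, dict):          # legacy report
        return len(report.get("osds", []))
    return len(report)                    # new report

print(num_osds_from_batch_report('[{"data": "/dev/sdb"}, {"data": "/dev/sdc"}]'))  # 2
```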
2026-03-17 01:02:35.938375 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.938380 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938384 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938387 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938391 | orchestrator |
2026-03-17 01:02:35.938395 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-17 01:02:35.938399 | orchestrator | Tuesday 17 March 2026 00:54:53 +0000 (0:00:00.545) 0:02:54.594 *********
2026-03-17 01:02:35.938403 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938406 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938410 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938414 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.938418 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.938421 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.938425 | orchestrator |
2026-03-17 01:02:35.938429 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-17 01:02:35.938433 | orchestrator | Tuesday 17 March 2026 00:54:55 +0000 (0:00:02.580) 0:02:57.175 *********
2026-03-17 01:02:35.938436 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.938440 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.938444 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.938448 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938452 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938455 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938459 | orchestrator |
2026-03-17 01:02:35.938463 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-17 01:02:35.938467 | orchestrator | Tuesday 17 March 2026 00:54:56 +0000 (0:00:00.562) 0:02:57.738 *********
2026-03-17 01:02:35.938470 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.938474 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.938478 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938482 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.938485 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938489 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938493 | orchestrator |
2026-03-17 01:02:35.938497 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-17 01:02:35.938500 | orchestrator | Tuesday 17 March 2026 00:54:57 +0000 (0:00:00.942) 0:02:58.680 *********
2026-03-17 01:02:35.938510 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.938514 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.938518 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.938522 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938526 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938530 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938533 | orchestrator |
2026-03-17 01:02:35.938537 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-17 01:02:35.938541 | orchestrator | Tuesday 17 March 2026 00:54:58 +0000 (0:00:00.775) 0:02:59.455 *********
2026-03-17 01:02:35.938545 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-17 01:02:35.938549 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-17 01:02:35.938553 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-17 01:02:35.938556 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938575 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938580 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938584 | orchestrator |
2026-03-17 01:02:35.938587 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-17 01:02:35.938591 | orchestrator | Tuesday 17 March 2026 00:54:58 +0000 (0:00:00.689) 0:03:00.145 *********
2026-03-17 01:02:35.938596 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-17 01:02:35.938602 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-17 01:02:35.938607 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.938611 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-17 01:02:35.938615 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-17 01:02:35.938619 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.938625 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-17 01:02:35.938629 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-17 01:02:35.938633 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.938637 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938641 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938645 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938651 | orchestrator |
2026-03-17 01:02:35.938655 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-17 01:02:35.938659 | orchestrator | Tuesday 17 March 2026 00:54:59 +0000 (0:00:00.542) 0:03:00.687 *********
2026-03-17 01:02:35.938663 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.938666 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.938670 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.938674 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.938678 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.938681 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.938685 | orchestrator |
2026-03-17 01:02:35.938689 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-17 01:02:35.938693 | orchestrator | Tuesday 17 March 2026 00:55:00 +0000 (0:00:00.738) 0:03:01.425 *********
2026-03-17 01:02:35.938697 | orchestrator | skipping: [testbed-node-3]
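Editor's note: the 'Set config to cluster' loop items above show how each rgw instance dict is expanded into a `client.rgw.<cluster>.<host>.<instance>` section with `log_file` and `rgw_frontends` settings. A minimal sketch that rebuilds those values from the instance dicts logged here (the helper function is illustrative; ceph-ansible does this via Jinja templating):

```python
def rgw_instance_config(cluster: str, host: str, inst: dict) -> dict:
    # Rebuild the per-instance rgw settings shown in the
    # 'Set config to cluster' loop items (illustrative helper).
    section = f"client.rgw.{cluster}.{host}.{inst['instance_name']}"
    return {
        "key": section,
        "value": {
            "log_file": f"/var/log/ceph/ceph-rgw-{cluster}-{host}.{inst['instance_name']}.log",
            "rgw_frontends": f"beast endpoint={inst['radosgw_address']}:{inst['radosgw_frontend_port']}",
        },
    }

cfg = rgw_instance_config(
    "default", "testbed-node-3",
    {"instance_name": "rgw0", "radosgw_address": "192.168.16.13",
     "radosgw_frontend_port": 8081},
)
print(cfg["value"]["rgw_frontends"])  # beast endpoint=192.168.16.13:8081
```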
2026-03-17 01:02:35.938700 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.938704 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.938708 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.938712 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.938728 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.938737 | orchestrator | 2026-03-17 01:02:35.938741 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-17 01:02:35.938745 | orchestrator | Tuesday 17 March 2026 00:55:01 +0000 (0:00:00.764) 0:03:02.190 ********* 2026-03-17 01:02:35.938749 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.938753 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.938756 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.938760 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.938764 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.938767 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.938771 | orchestrator | 2026-03-17 01:02:35.938775 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-17 01:02:35.938779 | orchestrator | Tuesday 17 March 2026 00:55:01 +0000 (0:00:00.847) 0:03:03.038 ********* 2026-03-17 01:02:35.938782 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.938786 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.938790 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.938794 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.938797 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.938801 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.938805 | orchestrator | 2026-03-17 01:02:35.938809 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] 
**** 2026-03-17 01:02:35.938826 | orchestrator | Tuesday 17 March 2026 00:55:02 +0000 (0:00:00.495) 0:03:03.533 ********* 2026-03-17 01:02:35.938830 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.938834 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.938838 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.938842 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.938845 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.938849 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.938853 | orchestrator | 2026-03-17 01:02:35.938857 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-17 01:02:35.938861 | orchestrator | Tuesday 17 March 2026 00:55:02 +0000 (0:00:00.644) 0:03:04.177 ********* 2026-03-17 01:02:35.938865 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.938869 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.938872 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.938876 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.938880 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.938884 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.938887 | orchestrator | 2026-03-17 01:02:35.938891 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-17 01:02:35.938898 | orchestrator | Tuesday 17 March 2026 00:55:03 +0000 (0:00:00.585) 0:03:04.763 ********* 2026-03-17 01:02:35.938902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.938906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.938910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.938914 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.938918 | orchestrator | 2026-03-17 01:02:35.938921 | orchestrator | TASK [ceph-facts : 
Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-17 01:02:35.938925 | orchestrator | Tuesday 17 March 2026 00:55:04 +0000 (0:00:00.525) 0:03:05.289 ********* 2026-03-17 01:02:35.938929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.938933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.938937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.938940 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.938944 | orchestrator | 2026-03-17 01:02:35.938948 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-17 01:02:35.938952 | orchestrator | Tuesday 17 March 2026 00:55:04 +0000 (0:00:00.510) 0:03:05.799 ********* 2026-03-17 01:02:35.938956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.938962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.938966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.938970 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.938974 | orchestrator | 2026-03-17 01:02:35.938977 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-17 01:02:35.938981 | orchestrator | Tuesday 17 March 2026 00:55:05 +0000 (0:00:00.649) 0:03:06.448 ********* 2026-03-17 01:02:35.938985 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.938989 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.938993 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.938997 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.939000 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.939004 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.939008 | orchestrator | 2026-03-17 01:02:35.939012 | orchestrator | TASK [ceph-facts : Set_fact 
rgw_instances] ************************************* 2026-03-17 01:02:35.939016 | orchestrator | Tuesday 17 March 2026 00:55:05 +0000 (0:00:00.551) 0:03:07.000 ********* 2026-03-17 01:02:35.939019 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-17 01:02:35.939023 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-17 01:02:35.939027 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-17 01:02:35.939031 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.939035 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-17 01:02:35.939039 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-17 01:02:35.939042 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.939046 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-17 01:02:35.939050 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.939054 | orchestrator | 2026-03-17 01:02:35.939058 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-17 01:02:35.939062 | orchestrator | Tuesday 17 March 2026 00:55:07 +0000 (0:00:01.367) 0:03:08.368 ********* 2026-03-17 01:02:35.939065 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.939069 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.939073 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.939077 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.939080 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.939084 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.939088 | orchestrator | 2026-03-17 01:02:35.939092 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 01:02:35.939096 | orchestrator | Tuesday 17 March 2026 00:55:09 +0000 (0:00:02.421) 0:03:10.789 ********* 2026-03-17 01:02:35.939103 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.939107 | orchestrator | changed: [testbed-node-4] 
2026-03-17 01:02:35.939110 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.939114 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.939118 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.939122 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.939126 | orchestrator | 2026-03-17 01:02:35.939129 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-17 01:02:35.939133 | orchestrator | Tuesday 17 March 2026 00:55:11 +0000 (0:00:01.619) 0:03:12.408 ********* 2026-03-17 01:02:35.939137 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939141 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.939145 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.939149 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.939152 | orchestrator | 2026-03-17 01:02:35.939156 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-17 01:02:35.939172 | orchestrator | Tuesday 17 March 2026 00:55:12 +0000 (0:00:01.008) 0:03:13.417 ********* 2026-03-17 01:02:35.939176 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.939180 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.939184 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.939188 | orchestrator | 2026-03-17 01:02:35.939191 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-17 01:02:35.939195 | orchestrator | Tuesday 17 March 2026 00:55:12 +0000 (0:00:00.374) 0:03:13.791 ********* 2026-03-17 01:02:35.939199 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.939203 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.939207 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.939210 | orchestrator | 2026-03-17 01:02:35.939214 | orchestrator | 
RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-17 01:02:35.939218 | orchestrator | Tuesday 17 March 2026 00:55:13 +0000 (0:00:01.209) 0:03:15.001 ********* 2026-03-17 01:02:35.939222 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 01:02:35.939226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 01:02:35.939230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 01:02:35.939233 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.939237 | orchestrator | 2026-03-17 01:02:35.939241 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-17 01:02:35.939245 | orchestrator | Tuesday 17 March 2026 00:55:14 +0000 (0:00:00.800) 0:03:15.801 ********* 2026-03-17 01:02:35.939248 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.939252 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.939256 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.939260 | orchestrator | 2026-03-17 01:02:35.939264 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-17 01:02:35.939267 | orchestrator | Tuesday 17 March 2026 00:55:14 +0000 (0:00:00.324) 0:03:16.126 ********* 2026-03-17 01:02:35.939271 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.939275 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.939279 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.939283 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.939286 | orchestrator | 2026-03-17 01:02:35.939290 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-17 01:02:35.939294 | orchestrator | Tuesday 17 March 2026 00:55:16 +0000 (0:00:01.208) 0:03:17.334 ********* 2026-03-17 
01:02:35.939298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.939304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.939308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.939312 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939318 | orchestrator | 2026-03-17 01:02:35.939322 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-17 01:02:35.939326 | orchestrator | Tuesday 17 March 2026 00:55:16 +0000 (0:00:00.382) 0:03:17.717 ********* 2026-03-17 01:02:35.939330 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939334 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.939337 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.939341 | orchestrator | 2026-03-17 01:02:35.939345 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-17 01:02:35.939349 | orchestrator | Tuesday 17 March 2026 00:55:17 +0000 (0:00:00.506) 0:03:18.223 ********* 2026-03-17 01:02:35.939353 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939356 | orchestrator | 2026-03-17 01:02:35.939360 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-17 01:02:35.939364 | orchestrator | Tuesday 17 March 2026 00:55:17 +0000 (0:00:00.203) 0:03:18.427 ********* 2026-03-17 01:02:35.939368 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939372 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.939375 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.939379 | orchestrator | 2026-03-17 01:02:35.939383 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-17 01:02:35.939387 | orchestrator | Tuesday 17 March 2026 00:55:17 +0000 (0:00:00.330) 0:03:18.757 ********* 
2026-03-17 01:02:35.939391 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939394 | orchestrator | 2026-03-17 01:02:35.939398 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-17 01:02:35.939402 | orchestrator | Tuesday 17 March 2026 00:55:17 +0000 (0:00:00.236) 0:03:18.994 ********* 2026-03-17 01:02:35.939406 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939410 | orchestrator | 2026-03-17 01:02:35.939414 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-17 01:02:35.939417 | orchestrator | Tuesday 17 March 2026 00:55:18 +0000 (0:00:00.212) 0:03:19.207 ********* 2026-03-17 01:02:35.939421 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939425 | orchestrator | 2026-03-17 01:02:35.939429 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-17 01:02:35.939432 | orchestrator | Tuesday 17 March 2026 00:55:18 +0000 (0:00:00.137) 0:03:19.344 ********* 2026-03-17 01:02:35.939436 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939440 | orchestrator | 2026-03-17 01:02:35.939444 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-17 01:02:35.939448 | orchestrator | Tuesday 17 March 2026 00:55:18 +0000 (0:00:00.231) 0:03:19.576 ********* 2026-03-17 01:02:35.939451 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939455 | orchestrator | 2026-03-17 01:02:35.939459 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-17 01:02:35.939463 | orchestrator | Tuesday 17 March 2026 00:55:18 +0000 (0:00:00.240) 0:03:19.816 ********* 2026-03-17 01:02:35.939467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.939471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 
01:02:35.939474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.939478 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939482 | orchestrator | 2026-03-17 01:02:35.939487 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-17 01:02:35.939509 | orchestrator | Tuesday 17 March 2026 00:55:19 +0000 (0:00:00.435) 0:03:20.252 ********* 2026-03-17 01:02:35.939516 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939523 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.939529 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.939533 | orchestrator | 2026-03-17 01:02:35.939537 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-17 01:02:35.939541 | orchestrator | Tuesday 17 March 2026 00:55:19 +0000 (0:00:00.514) 0:03:20.767 ********* 2026-03-17 01:02:35.939548 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939552 | orchestrator | 2026-03-17 01:02:35.939556 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-17 01:02:35.939560 | orchestrator | Tuesday 17 March 2026 00:55:19 +0000 (0:00:00.214) 0:03:20.982 ********* 2026-03-17 01:02:35.939563 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939567 | orchestrator | 2026-03-17 01:02:35.939571 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-17 01:02:35.939575 | orchestrator | Tuesday 17 March 2026 00:55:19 +0000 (0:00:00.200) 0:03:21.182 ********* 2026-03-17 01:02:35.939579 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.939583 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.939586 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.939590 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-03-17 01:02:35.939594 | orchestrator | 2026-03-17 01:02:35.939598 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-17 01:02:35.939602 | orchestrator | Tuesday 17 March 2026 00:55:20 +0000 (0:00:00.720) 0:03:21.902 ********* 2026-03-17 01:02:35.939606 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.939609 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.939613 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.939617 | orchestrator | 2026-03-17 01:02:35.939621 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-17 01:02:35.939625 | orchestrator | Tuesday 17 March 2026 00:55:21 +0000 (0:00:00.365) 0:03:22.268 ********* 2026-03-17 01:02:35.939629 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.939633 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.939636 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.939640 | orchestrator | 2026-03-17 01:02:35.939644 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-17 01:02:35.939648 | orchestrator | Tuesday 17 March 2026 00:55:22 +0000 (0:00:01.166) 0:03:23.434 ********* 2026-03-17 01:02:35.939654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.939658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.939662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.939666 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939669 | orchestrator | 2026-03-17 01:02:35.939673 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-17 01:02:35.939677 | orchestrator | Tuesday 17 March 2026 00:55:22 +0000 (0:00:00.528) 0:03:23.963 ********* 2026-03-17 01:02:35.939681 | orchestrator | ok: 
[testbed-node-3] 2026-03-17 01:02:35.939685 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.939689 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.939692 | orchestrator | 2026-03-17 01:02:35.939696 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-17 01:02:35.939700 | orchestrator | Tuesday 17 March 2026 00:55:23 +0000 (0:00:00.253) 0:03:24.216 ********* 2026-03-17 01:02:35.939704 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.939708 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.939712 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.939728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.939735 | orchestrator | 2026-03-17 01:02:35.939741 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-17 01:02:35.939747 | orchestrator | Tuesday 17 March 2026 00:55:23 +0000 (0:00:00.804) 0:03:25.021 ********* 2026-03-17 01:02:35.939755 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.939761 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.939767 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.939773 | orchestrator | 2026-03-17 01:02:35.939780 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-17 01:02:35.939784 | orchestrator | Tuesday 17 March 2026 00:55:24 +0000 (0:00:00.295) 0:03:25.316 ********* 2026-03-17 01:02:35.939790 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.939794 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.939798 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.939801 | orchestrator | 2026-03-17 01:02:35.939805 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-17 01:02:35.939809 | orchestrator | Tuesday 
17 March 2026 00:55:25 +0000 (0:00:01.250) 0:03:26.566 ********* 2026-03-17 01:02:35.939813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.939817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.939820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.939824 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939828 | orchestrator | 2026-03-17 01:02:35.939832 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-17 01:02:35.939836 | orchestrator | Tuesday 17 March 2026 00:55:25 +0000 (0:00:00.570) 0:03:27.136 ********* 2026-03-17 01:02:35.939840 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.939843 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.939847 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.939851 | orchestrator | 2026-03-17 01:02:35.939855 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-17 01:02:35.939859 | orchestrator | Tuesday 17 March 2026 00:55:26 +0000 (0:00:00.291) 0:03:27.428 ********* 2026-03-17 01:02:35.939863 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.939866 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.939870 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.939874 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.939878 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.939897 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.939901 | orchestrator | 2026-03-17 01:02:35.939905 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-17 01:02:35.939909 | orchestrator | Tuesday 17 March 2026 00:55:26 +0000 (0:00:00.483) 0:03:27.911 ********* 2026-03-17 01:02:35.939913 | orchestrator | skipping: [testbed-node-3] 2026-03-17 
01:02:35.939917 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.939920 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.939924 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.939929 | orchestrator | 2026-03-17 01:02:35.939935 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-17 01:02:35.939943 | orchestrator | Tuesday 17 March 2026 00:55:27 +0000 (0:00:00.845) 0:03:28.756 ********* 2026-03-17 01:02:35.939951 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.939958 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.939965 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.939971 | orchestrator | 2026-03-17 01:02:35.939977 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-17 01:02:35.939983 | orchestrator | Tuesday 17 March 2026 00:55:27 +0000 (0:00:00.255) 0:03:29.012 ********* 2026-03-17 01:02:35.939989 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.939994 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.940000 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.940006 | orchestrator | 2026-03-17 01:02:35.940013 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-17 01:02:35.940020 | orchestrator | Tuesday 17 March 2026 00:55:29 +0000 (0:00:01.350) 0:03:30.362 ********* 2026-03-17 01:02:35.940027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 01:02:35.940034 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 01:02:35.940040 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 01:02:35.940047 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940054 | orchestrator | 2026-03-17 01:02:35.940062 | orchestrator | 
RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-17 01:02:35.940065 | orchestrator | Tuesday 17 March 2026 00:55:29 +0000 (0:00:00.589) 0:03:30.952 ********* 2026-03-17 01:02:35.940069 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940073 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940077 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940081 | orchestrator | 2026-03-17 01:02:35.940084 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-17 01:02:35.940088 | orchestrator | 2026-03-17 01:02:35.940095 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 01:02:35.940099 | orchestrator | Tuesday 17 March 2026 00:55:30 +0000 (0:00:00.677) 0:03:31.629 ********* 2026-03-17 01:02:35.940103 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.940107 | orchestrator | 2026-03-17 01:02:35.940111 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 01:02:35.940115 | orchestrator | Tuesday 17 March 2026 00:55:31 +0000 (0:00:00.730) 0:03:32.360 ********* 2026-03-17 01:02:35.940118 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.940122 | orchestrator | 2026-03-17 01:02:35.940126 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 01:02:35.940130 | orchestrator | Tuesday 17 March 2026 00:55:31 +0000 (0:00:00.489) 0:03:32.850 ********* 2026-03-17 01:02:35.940134 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940138 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940141 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940145 | orchestrator | 
2026-03-17 01:02:35.940149 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 01:02:35.940153 | orchestrator | Tuesday 17 March 2026 00:55:32 +0000 (0:00:00.786) 0:03:33.636 ********* 2026-03-17 01:02:35.940157 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940160 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940164 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940168 | orchestrator | 2026-03-17 01:02:35.940172 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 01:02:35.940176 | orchestrator | Tuesday 17 March 2026 00:55:32 +0000 (0:00:00.264) 0:03:33.901 ********* 2026-03-17 01:02:35.940179 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940183 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940187 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940191 | orchestrator | 2026-03-17 01:02:35.940195 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 01:02:35.940198 | orchestrator | Tuesday 17 March 2026 00:55:33 +0000 (0:00:00.462) 0:03:34.363 ********* 2026-03-17 01:02:35.940202 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940206 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940210 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940214 | orchestrator | 2026-03-17 01:02:35.940218 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 01:02:35.940221 | orchestrator | Tuesday 17 March 2026 00:55:33 +0000 (0:00:00.323) 0:03:34.687 ********* 2026-03-17 01:02:35.940225 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940229 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940233 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940237 | orchestrator | 2026-03-17 
01:02:35.940241 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 01:02:35.940244 | orchestrator | Tuesday 17 March 2026 00:55:34 +0000 (0:00:00.794) 0:03:35.482 ********* 2026-03-17 01:02:35.940248 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940252 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940256 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940260 | orchestrator | 2026-03-17 01:02:35.940263 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 01:02:35.940270 | orchestrator | Tuesday 17 March 2026 00:55:34 +0000 (0:00:00.300) 0:03:35.783 ********* 2026-03-17 01:02:35.940290 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940295 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940298 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940302 | orchestrator | 2026-03-17 01:02:35.940306 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 01:02:35.940310 | orchestrator | Tuesday 17 March 2026 00:55:34 +0000 (0:00:00.383) 0:03:36.166 ********* 2026-03-17 01:02:35.940314 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940317 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940321 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940325 | orchestrator | 2026-03-17 01:02:35.940329 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 01:02:35.940333 | orchestrator | Tuesday 17 March 2026 00:55:35 +0000 (0:00:00.620) 0:03:36.786 ********* 2026-03-17 01:02:35.940336 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940340 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940344 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940348 | orchestrator | 2026-03-17 01:02:35.940352 | orchestrator | TASK 
[ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 01:02:35.940356 | orchestrator | Tuesday 17 March 2026 00:55:36 +0000 (0:00:00.654) 0:03:37.441 ********* 2026-03-17 01:02:35.940359 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940363 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940367 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940371 | orchestrator | 2026-03-17 01:02:35.940375 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 01:02:35.940378 | orchestrator | Tuesday 17 March 2026 00:55:36 +0000 (0:00:00.289) 0:03:37.731 ********* 2026-03-17 01:02:35.940382 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940386 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940390 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940394 | orchestrator | 2026-03-17 01:02:35.940397 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 01:02:35.940401 | orchestrator | Tuesday 17 March 2026 00:55:37 +0000 (0:00:00.583) 0:03:38.315 ********* 2026-03-17 01:02:35.940405 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940409 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940413 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940416 | orchestrator | 2026-03-17 01:02:35.940420 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 01:02:35.940424 | orchestrator | Tuesday 17 March 2026 00:55:37 +0000 (0:00:00.322) 0:03:38.637 ********* 2026-03-17 01:02:35.940428 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940432 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940438 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940442 | orchestrator | 2026-03-17 01:02:35.940448 | orchestrator | TASK [ceph-handler : 
Set_fact handler_rgw_status] ****************************** 2026-03-17 01:02:35.940454 | orchestrator | Tuesday 17 March 2026 00:55:37 +0000 (0:00:00.318) 0:03:38.956 ********* 2026-03-17 01:02:35.940463 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940470 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940476 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940482 | orchestrator | 2026-03-17 01:02:35.940488 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 01:02:35.940494 | orchestrator | Tuesday 17 March 2026 00:55:38 +0000 (0:00:00.315) 0:03:39.271 ********* 2026-03-17 01:02:35.940499 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940505 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940511 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940516 | orchestrator | 2026-03-17 01:02:35.940521 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 01:02:35.940527 | orchestrator | Tuesday 17 March 2026 00:55:38 +0000 (0:00:00.538) 0:03:39.810 ********* 2026-03-17 01:02:35.940540 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940547 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.940553 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.940558 | orchestrator | 2026-03-17 01:02:35.940564 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 01:02:35.940570 | orchestrator | Tuesday 17 March 2026 00:55:38 +0000 (0:00:00.301) 0:03:40.111 ********* 2026-03-17 01:02:35.940576 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940581 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940587 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940593 | orchestrator | 2026-03-17 01:02:35.940599 | orchestrator | TASK [ceph-handler : Set_fact 
handler_crash_status] **************************** 2026-03-17 01:02:35.940605 | orchestrator | Tuesday 17 March 2026 00:55:39 +0000 (0:00:00.321) 0:03:40.433 ********* 2026-03-17 01:02:35.940612 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940619 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940626 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940632 | orchestrator | 2026-03-17 01:02:35.940639 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 01:02:35.940644 | orchestrator | Tuesday 17 March 2026 00:55:39 +0000 (0:00:00.388) 0:03:40.822 ********* 2026-03-17 01:02:35.940647 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940651 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940655 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940659 | orchestrator | 2026-03-17 01:02:35.940662 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-17 01:02:35.940666 | orchestrator | Tuesday 17 March 2026 00:55:40 +0000 (0:00:00.978) 0:03:41.801 ********* 2026-03-17 01:02:35.940670 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940674 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940677 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940681 | orchestrator | 2026-03-17 01:02:35.940685 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-17 01:02:35.940689 | orchestrator | Tuesday 17 March 2026 00:55:40 +0000 (0:00:00.319) 0:03:42.120 ********* 2026-03-17 01:02:35.940693 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.940697 | orchestrator | 2026-03-17 01:02:35.940701 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-17 01:02:35.940704 | orchestrator | Tuesday 17 March 
2026 00:55:41 +0000 (0:00:00.692) 0:03:42.812 ********* 2026-03-17 01:02:35.940708 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.940713 | orchestrator | 2026-03-17 01:02:35.940773 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-17 01:02:35.940783 | orchestrator | Tuesday 17 March 2026 00:55:42 +0000 (0:00:00.384) 0:03:43.197 ********* 2026-03-17 01:02:35.940789 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-17 01:02:35.940796 | orchestrator | 2026-03-17 01:02:35.940802 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-17 01:02:35.940808 | orchestrator | Tuesday 17 March 2026 00:55:43 +0000 (0:00:01.094) 0:03:44.291 ********* 2026-03-17 01:02:35.940814 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940820 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940823 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940827 | orchestrator | 2026-03-17 01:02:35.940831 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-17 01:02:35.940835 | orchestrator | Tuesday 17 March 2026 00:55:43 +0000 (0:00:00.334) 0:03:44.626 ********* 2026-03-17 01:02:35.940839 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940842 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940846 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940850 | orchestrator | 2026-03-17 01:02:35.940854 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-17 01:02:35.940858 | orchestrator | Tuesday 17 March 2026 00:55:43 +0000 (0:00:00.285) 0:03:44.912 ********* 2026-03-17 01:02:35.940866 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.940870 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.940873 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.940877 | orchestrator | 
2026-03-17 01:02:35.940881 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-17 01:02:35.940885 | orchestrator | Tuesday 17 March 2026 00:55:44 +0000 (0:00:01.188) 0:03:46.101 ********* 2026-03-17 01:02:35.940889 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.940892 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.940896 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.940900 | orchestrator | 2026-03-17 01:02:35.940904 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-17 01:02:35.940908 | orchestrator | Tuesday 17 March 2026 00:55:45 +0000 (0:00:00.948) 0:03:47.049 ********* 2026-03-17 01:02:35.940911 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.940915 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.940919 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.940923 | orchestrator | 2026-03-17 01:02:35.940926 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-17 01:02:35.940930 | orchestrator | Tuesday 17 March 2026 00:55:46 +0000 (0:00:00.703) 0:03:47.753 ********* 2026-03-17 01:02:35.940938 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940942 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.940945 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.940949 | orchestrator | 2026-03-17 01:02:35.940953 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-17 01:02:35.940957 | orchestrator | Tuesday 17 March 2026 00:55:47 +0000 (0:00:00.741) 0:03:48.494 ********* 2026-03-17 01:02:35.940961 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.940965 | orchestrator | 2026-03-17 01:02:35.940968 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-17 01:02:35.940972 | orchestrator | 
Tuesday 17 March 2026 00:55:48 +0000 (0:00:01.123) 0:03:49.617 ********* 2026-03-17 01:02:35.940976 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.940980 | orchestrator | 2026-03-17 01:02:35.940984 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-17 01:02:35.940987 | orchestrator | Tuesday 17 March 2026 00:55:49 +0000 (0:00:00.606) 0:03:50.223 ********* 2026-03-17 01:02:35.940991 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-17 01:02:35.940995 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.940999 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.941003 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 01:02:35.941006 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-17 01:02:35.941010 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 01:02:35.941014 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 01:02:35.941018 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-17 01:02:35.941022 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 01:02:35.941025 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-17 01:02:35.941029 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-17 01:02:35.941033 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-17 01:02:35.941037 | orchestrator | 2026-03-17 01:02:35.941041 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-17 01:02:35.941044 | orchestrator | Tuesday 17 March 2026 00:55:52 +0000 (0:00:03.707) 0:03:53.931 ********* 2026-03-17 01:02:35.941048 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.941052 | orchestrator | 
changed: [testbed-node-0] 2026-03-17 01:02:35.941056 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.941060 | orchestrator | 2026-03-17 01:02:35.941068 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-17 01:02:35.941072 | orchestrator | Tuesday 17 March 2026 00:55:54 +0000 (0:00:01.365) 0:03:55.297 ********* 2026-03-17 01:02:35.941075 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941079 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941083 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.941087 | orchestrator | 2026-03-17 01:02:35.941091 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-17 01:02:35.941094 | orchestrator | Tuesday 17 March 2026 00:55:54 +0000 (0:00:00.298) 0:03:55.596 ********* 2026-03-17 01:02:35.941098 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941102 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941106 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.941109 | orchestrator | 2026-03-17 01:02:35.941113 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-17 01:02:35.941117 | orchestrator | Tuesday 17 March 2026 00:55:54 +0000 (0:00:00.324) 0:03:55.920 ********* 2026-03-17 01:02:35.941121 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.941139 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.941143 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.941147 | orchestrator | 2026-03-17 01:02:35.941151 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-17 01:02:35.941155 | orchestrator | Tuesday 17 March 2026 00:55:56 +0000 (0:00:01.810) 0:03:57.731 ********* 2026-03-17 01:02:35.941159 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.941163 | orchestrator | changed: [testbed-node-1] 2026-03-17 
01:02:35.941173 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.941177 | orchestrator | 2026-03-17 01:02:35.941186 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-17 01:02:35.941190 | orchestrator | Tuesday 17 March 2026 00:55:58 +0000 (0:00:01.508) 0:03:59.240 ********* 2026-03-17 01:02:35.941194 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941197 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941201 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941205 | orchestrator | 2026-03-17 01:02:35.941209 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-17 01:02:35.941213 | orchestrator | Tuesday 17 March 2026 00:55:58 +0000 (0:00:00.350) 0:03:59.590 ********* 2026-03-17 01:02:35.941216 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.941220 | orchestrator | 2026-03-17 01:02:35.941224 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-17 01:02:35.941228 | orchestrator | Tuesday 17 March 2026 00:55:58 +0000 (0:00:00.484) 0:04:00.075 ********* 2026-03-17 01:02:35.941232 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941236 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941239 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941243 | orchestrator | 2026-03-17 01:02:35.941247 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-17 01:02:35.941251 | orchestrator | Tuesday 17 March 2026 00:55:59 +0000 (0:00:00.539) 0:04:00.614 ********* 2026-03-17 01:02:35.941255 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941259 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941262 | orchestrator | skipping: [testbed-node-2] 
2026-03-17 01:02:35.941266 | orchestrator | 2026-03-17 01:02:35.941270 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-17 01:02:35.941274 | orchestrator | Tuesday 17 March 2026 00:55:59 +0000 (0:00:00.296) 0:04:00.910 ********* 2026-03-17 01:02:35.941282 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.941286 | orchestrator | 2026-03-17 01:02:35.941290 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-17 01:02:35.941294 | orchestrator | Tuesday 17 March 2026 00:56:00 +0000 (0:00:00.580) 0:04:01.491 ********* 2026-03-17 01:02:35.941301 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.941304 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.941308 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.941312 | orchestrator | 2026-03-17 01:02:35.941316 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-17 01:02:35.941320 | orchestrator | Tuesday 17 March 2026 00:56:02 +0000 (0:00:02.260) 0:04:03.752 ********* 2026-03-17 01:02:35.941324 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.941327 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.941331 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.941335 | orchestrator | 2026-03-17 01:02:35.941339 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-17 01:02:35.941343 | orchestrator | Tuesday 17 March 2026 00:56:03 +0000 (0:00:01.196) 0:04:04.949 ********* 2026-03-17 01:02:35.941346 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.941350 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.941354 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.941358 | orchestrator | 2026-03-17 01:02:35.941362 | 
orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-17 01:02:35.941365 | orchestrator | Tuesday 17 March 2026 00:56:05 +0000 (0:00:01.567) 0:04:06.516 ********* 2026-03-17 01:02:35.941369 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:02:35.941373 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:02:35.941377 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:02:35.941381 | orchestrator | 2026-03-17 01:02:35.941385 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-17 01:02:35.941388 | orchestrator | Tuesday 17 March 2026 00:56:07 +0000 (0:00:02.307) 0:04:08.824 ********* 2026-03-17 01:02:35.941392 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.941396 | orchestrator | 2026-03-17 01:02:35.941400 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-17 01:02:35.941404 | orchestrator | Tuesday 17 March 2026 00:56:08 +0000 (0:00:00.920) 0:04:09.745 ********* 2026-03-17 01:02:35.941407 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-17 01:02:35.941411 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941415 | orchestrator | 2026-03-17 01:02:35.941419 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-17 01:02:35.941423 | orchestrator | Tuesday 17 March 2026 00:56:30 +0000 (0:00:21.856) 0:04:31.602 ********* 2026-03-17 01:02:35.941427 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941430 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.941434 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941438 | orchestrator | 2026-03-17 01:02:35.941442 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-17 01:02:35.941446 | orchestrator | Tuesday 17 March 2026 00:56:39 +0000 (0:00:09.068) 0:04:40.670 ********* 2026-03-17 01:02:35.941450 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941453 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941457 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941461 | orchestrator | 2026-03-17 01:02:35.941465 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-17 01:02:35.941480 | orchestrator | Tuesday 17 March 2026 00:56:39 +0000 (0:00:00.298) 0:04:40.968 ********* 2026-03-17 01:02:35.941485 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6306392d91f95ce791391f9b2421d9d383f7e75e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-17 01:02:35.941491 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6306392d91f95ce791391f9b2421d9d383f7e75e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-17 01:02:35.941498 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6306392d91f95ce791391f9b2421d9d383f7e75e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-17 01:02:35.941503 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6306392d91f95ce791391f9b2421d9d383f7e75e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-17 01:02:35.941510 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6306392d91f95ce791391f9b2421d9d383f7e75e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-17 01:02:35.941515 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6306392d91f95ce791391f9b2421d9d383f7e75e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__6306392d91f95ce791391f9b2421d9d383f7e75e'}])  2026-03-17 01:02:35.941520 | orchestrator | 2026-03-17 01:02:35.941524 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 01:02:35.941527 | orchestrator | Tuesday 17 March 2026 00:56:54 +0000 (0:00:14.758) 0:04:55.727 ********* 2026-03-17 01:02:35.941531 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941535 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941539 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941543 | orchestrator | 2026-03-17 01:02:35.941546 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-17 01:02:35.941550 | orchestrator | Tuesday 17 March 2026 00:56:54 +0000 (0:00:00.276) 0:04:56.004 ********* 2026-03-17 01:02:35.941554 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.941558 | orchestrator | 2026-03-17 01:02:35.941562 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-17 01:02:35.941565 | orchestrator | Tuesday 17 March 2026 00:56:55 +0000 (0:00:00.461) 0:04:56.465 ********* 2026-03-17 01:02:35.941569 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941573 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941577 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.941581 | orchestrator | 2026-03-17 01:02:35.941585 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-17 01:02:35.941588 | orchestrator | Tuesday 17 March 2026 00:56:55 +0000 (0:00:00.439) 0:04:56.904 ********* 2026-03-17 01:02:35.941592 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941596 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941600 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941603 | orchestrator | 2026-03-17 01:02:35.941607 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-17 
01:02:35.941611 | orchestrator | Tuesday 17 March 2026 00:56:56 +0000 (0:00:00.285) 0:04:57.190 ********* 2026-03-17 01:02:35.941615 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 01:02:35.941619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 01:02:35.941626 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 01:02:35.941629 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941633 | orchestrator | 2026-03-17 01:02:35.941637 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-17 01:02:35.941641 | orchestrator | Tuesday 17 March 2026 00:56:56 +0000 (0:00:00.452) 0:04:57.642 ********* 2026-03-17 01:02:35.941645 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941649 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941664 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.941668 | orchestrator | 2026-03-17 01:02:35.941672 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-17 01:02:35.941676 | orchestrator | 2026-03-17 01:02:35.941680 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 01:02:35.941684 | orchestrator | Tuesday 17 March 2026 00:56:57 +0000 (0:00:00.669) 0:04:58.312 ********* 2026-03-17 01:02:35.941687 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.941691 | orchestrator | 2026-03-17 01:02:35.941695 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-17 01:02:35.941699 | orchestrator | Tuesday 17 March 2026 00:56:57 +0000 (0:00:00.436) 0:04:58.748 ********* 2026-03-17 01:02:35.941703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-17 01:02:35.941707 | orchestrator | 2026-03-17 01:02:35.941711 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 01:02:35.941728 | orchestrator | Tuesday 17 March 2026 00:56:58 +0000 (0:00:00.446) 0:04:59.195 ********* 2026-03-17 01:02:35.941734 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941737 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941741 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.941745 | orchestrator | 2026-03-17 01:02:35.941749 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 01:02:35.941753 | orchestrator | Tuesday 17 March 2026 00:56:58 +0000 (0:00:00.871) 0:05:00.066 ********* 2026-03-17 01:02:35.941756 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941760 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941764 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941768 | orchestrator | 2026-03-17 01:02:35.941772 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 01:02:35.941775 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.253) 0:05:00.321 ********* 2026-03-17 01:02:35.941779 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941783 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941787 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941791 | orchestrator | 2026-03-17 01:02:35.941795 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 01:02:35.941798 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.225) 0:05:00.547 ********* 2026-03-17 01:02:35.941805 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941809 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941812 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:02:35.941816 | orchestrator | 2026-03-17 01:02:35.941820 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 01:02:35.941824 | orchestrator | Tuesday 17 March 2026 00:56:59 +0000 (0:00:00.252) 0:05:00.799 ********* 2026-03-17 01:02:35.941828 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941831 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941835 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.941839 | orchestrator | 2026-03-17 01:02:35.941843 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-17 01:02:35.941847 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:00.866) 0:05:01.666 ********* 2026-03-17 01:02:35.941854 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941858 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941861 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941865 | orchestrator | 2026-03-17 01:02:35.941869 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 01:02:35.941873 | orchestrator | Tuesday 17 March 2026 00:57:00 +0000 (0:00:00.296) 0:05:01.963 ********* 2026-03-17 01:02:35.941877 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941881 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941884 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941888 | orchestrator | 2026-03-17 01:02:35.941892 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 01:02:35.941896 | orchestrator | Tuesday 17 March 2026 00:57:01 +0000 (0:00:00.282) 0:05:02.245 ********* 2026-03-17 01:02:35.941899 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941903 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941907 | orchestrator | ok: [testbed-node-2] 2026-03-17 
01:02:35.941911 | orchestrator | 2026-03-17 01:02:35.941915 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 01:02:35.941918 | orchestrator | Tuesday 17 March 2026 00:57:01 +0000 (0:00:00.750) 0:05:02.996 ********* 2026-03-17 01:02:35.941922 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941926 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941930 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.941934 | orchestrator | 2026-03-17 01:02:35.941937 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 01:02:35.941941 | orchestrator | Tuesday 17 March 2026 00:57:02 +0000 (0:00:01.009) 0:05:04.005 ********* 2026-03-17 01:02:35.941945 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941949 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941953 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.941956 | orchestrator | 2026-03-17 01:02:35.941960 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 01:02:35.941964 | orchestrator | Tuesday 17 March 2026 00:57:03 +0000 (0:00:00.288) 0:05:04.294 ********* 2026-03-17 01:02:35.941968 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.941971 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.941975 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.941979 | orchestrator | 2026-03-17 01:02:35.941983 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 01:02:35.941987 | orchestrator | Tuesday 17 March 2026 00:57:03 +0000 (0:00:00.352) 0:05:04.646 ********* 2026-03-17 01:02:35.941990 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.941994 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.941998 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.942002 | orchestrator | 
2026-03-17 01:02:35.942006 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 01:02:35.942050 | orchestrator | Tuesday 17 March 2026 00:57:03 +0000 (0:00:00.291) 0:05:04.938 ********* 2026-03-17 01:02:35.942056 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.942060 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.942064 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.942068 | orchestrator | 2026-03-17 01:02:35.942072 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 01:02:35.942075 | orchestrator | Tuesday 17 March 2026 00:57:04 +0000 (0:00:00.464) 0:05:05.403 ********* 2026-03-17 01:02:35.942079 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.942083 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.942087 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.942091 | orchestrator | 2026-03-17 01:02:35.942095 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 01:02:35.942098 | orchestrator | Tuesday 17 March 2026 00:57:04 +0000 (0:00:00.280) 0:05:05.683 ********* 2026-03-17 01:02:35.942102 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.942106 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.942113 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.942116 | orchestrator | 2026-03-17 01:02:35.942120 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 01:02:35.942124 | orchestrator | Tuesday 17 March 2026 00:57:04 +0000 (0:00:00.271) 0:05:05.955 ********* 2026-03-17 01:02:35.942128 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.942132 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.942136 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.942139 | orchestrator | 
2026-03-17 01:02:35.942143 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 01:02:35.942147 | orchestrator | Tuesday 17 March 2026 00:57:05 +0000 (0:00:00.287) 0:05:06.242 *********
2026-03-17 01:02:35.942151 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.942155 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.942159 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.942163 | orchestrator | 
2026-03-17 01:02:35.942166 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 01:02:35.942170 | orchestrator | Tuesday 17 March 2026 00:57:05 +0000 (0:00:00.273) 0:05:06.516 *********
2026-03-17 01:02:35.942174 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.942178 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.942182 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.942186 | orchestrator | 
2026-03-17 01:02:35.942190 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 01:02:35.942193 | orchestrator | Tuesday 17 March 2026 00:57:05 +0000 (0:00:00.446) 0:05:06.962 *********
2026-03-17 01:02:35.942197 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.942203 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.942207 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.942211 | orchestrator | 
2026-03-17 01:02:35.942215 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-17 01:02:35.942219 | orchestrator | Tuesday 17 March 2026 00:57:06 +0000 (0:00:00.407) 0:05:07.370 *********
2026-03-17 01:02:35.942223 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 01:02:35.942226 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 01:02:35.942230 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 01:02:35.942234 | orchestrator | 
2026-03-17 01:02:35.942238 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-17 01:02:35.942242 | orchestrator | Tuesday 17 March 2026 00:57:06 +0000 (0:00:00.638) 0:05:08.008 *********
2026-03-17 01:02:35.942246 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:02:35.942249 | orchestrator | 
2026-03-17 01:02:35.942253 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-17 01:02:35.942257 | orchestrator | Tuesday 17 March 2026 00:57:07 +0000 (0:00:00.518) 0:05:08.526 *********
2026-03-17 01:02:35.942261 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.942265 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.942268 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.942272 | orchestrator | 
2026-03-17 01:02:35.942276 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-17 01:02:35.942280 | orchestrator | Tuesday 17 March 2026 00:57:08 +0000 (0:00:00.718) 0:05:09.245 *********
2026-03-17 01:02:35.942284 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.942288 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.942292 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.942295 | orchestrator | 
2026-03-17 01:02:35.942299 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-17 01:02:35.942303 | orchestrator | Tuesday 17 March 2026 00:57:08 +0000 (0:00:00.259) 0:05:09.505 *********
2026-03-17 01:02:35.942307 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:02:35.942311 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:02:35.942317 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:02:35.942321 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-17 01:02:35.942325 | orchestrator | 
2026-03-17 01:02:35.942329 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-17 01:02:35.942333 | orchestrator | Tuesday 17 March 2026 00:57:18 +0000 (0:00:09.826) 0:05:19.331 *********
2026-03-17 01:02:35.942336 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.942340 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.942344 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.942348 | orchestrator | 
2026-03-17 01:02:35.942352 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-17 01:02:35.942356 | orchestrator | Tuesday 17 March 2026 00:57:18 +0000 (0:00:00.433) 0:05:19.764 *********
2026-03-17 01:02:35.942359 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-03-17 01:02:35.942363 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-03-17 01:02:35.942367 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2026-03-17 01:02:35.942371 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:02:35.942375 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-17 01:02:35.942391 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:02:35.942395 | orchestrator | 
2026-03-17 01:02:35.942399 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-17 01:02:35.942403 | orchestrator | Tuesday 17 March 2026 00:57:20 +0000 (0:00:01.813) 0:05:21.578 *********
2026-03-17 01:02:35.942407 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-03-17 01:02:35.942410 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-03-17 01:02:35.942414 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2026-03-17 01:02:35.942418 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:02:35.942422 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-17 01:02:35.942426 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-17 01:02:35.942430 | orchestrator | 
2026-03-17 01:02:35.942433 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-17 01:02:35.942437 | orchestrator | Tuesday 17 March 2026 00:57:21 +0000 (0:00:01.077) 0:05:22.655 *********
2026-03-17 01:02:35.942441 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.942445 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.942449 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.942453 | orchestrator | 
2026-03-17 01:02:35.942457 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-17 01:02:35.942460 | orchestrator | Tuesday 17 March 2026 00:57:22 +0000 (0:00:00.771) 0:05:23.427 *********
2026-03-17 01:02:35.942464 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.942468 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.942472 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.942476 | orchestrator | 
2026-03-17 01:02:35.942480 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-17 01:02:35.942483 | orchestrator | Tuesday 17 March 2026 00:57:22 +0000 (0:00:00.463) 0:05:23.890 *********
2026-03-17 01:02:35.942487 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.942491 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.942495 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.942499 | orchestrator | 
2026-03-17 01:02:35.942503 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-17 01:02:35.942506 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.347) 0:05:24.237 *********
2026-03-17 01:02:35.942510 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1, testbed-node-2, testbed-node-0
2026-03-17 01:02:35.942514 | orchestrator | 
2026-03-17 01:02:35.942520 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-17 01:02:35.942524 | orchestrator | Tuesday 17 March 2026 00:57:23 +0000 (0:00:00.573) 0:05:24.811 *********
2026-03-17 01:02:35.942532 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.942536 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.942540 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.942544 | orchestrator | 
2026-03-17 01:02:35.942548 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-17 01:02:35.942552 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:00.663) 0:05:25.474 *********
2026-03-17 01:02:35.942556 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.942559 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.942563 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.942567 | orchestrator | 
2026-03-17 01:02:35.942571 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-17 01:02:35.942575 | orchestrator | Tuesday 17 March 2026 00:57:24 +0000 (0:00:00.614) 0:05:26.089 *********
2026-03-17 01:02:35.942579 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:02:35.942582 | orchestrator | 
2026-03-17 01:02:35.942586 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-17 01:02:35.942590 | orchestrator | Tuesday 17 March 2026 00:57:25 +0000 (0:00:00.478) 0:05:26.568 *********
2026-03-17 01:02:35.942594 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.942598 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.942602 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.942605 | orchestrator | 
2026-03-17 01:02:35.942609 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-17 01:02:35.942613 | orchestrator | Tuesday 17 March 2026 00:57:26 +0000 (0:00:01.223) 0:05:27.792 *********
2026-03-17 01:02:35.942617 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.942621 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.942625 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.942629 | orchestrator | 
2026-03-17 01:02:35.942632 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-17 01:02:35.942636 | orchestrator | Tuesday 17 March 2026 00:57:27 +0000 (0:00:01.239) 0:05:29.031 *********
2026-03-17 01:02:35.942640 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.942644 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.942648 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.942651 | orchestrator | 
2026-03-17 01:02:35.942655 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-17 01:02:35.942659 | orchestrator | Tuesday 17 March 2026 00:57:29 +0000 (0:00:01.565) 0:05:30.597 *********
2026-03-17 01:02:35.942663 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.942667 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.942671 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.942675 | orchestrator | 
2026-03-17 01:02:35.942678 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-17 01:02:35.942682 | orchestrator | Tuesday 17 March 2026 00:57:31 +0000 (0:00:02.424) 0:05:33.022 *********
2026-03-17 01:02:35.942686 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.942690 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.942694 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-17 01:02:35.942698 | orchestrator | 
2026-03-17 01:02:35.942701 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-17 01:02:35.942705 | orchestrator | Tuesday 17 March 2026 00:57:32 +0000 (0:00:00.384) 0:05:33.406 *********
2026-03-17 01:02:35.942733 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-17 01:02:35.942738 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-17 01:02:35.942742 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-17 01:02:35.942746 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-17 01:02:35.942753 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-03-17 01:02:35.942757 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-03-17 01:02:35.942761 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:02:35.942764 | orchestrator | 
2026-03-17 01:02:35.942768 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-17 01:02:35.942772 | orchestrator | Tuesday 17 March 2026 00:58:08 +0000 (0:00:36.362) 0:06:09.768 *********
2026-03-17 01:02:35.942776 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:02:35.942780 | orchestrator | 
2026-03-17 01:02:35.942783 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-17 01:02:35.942787 | orchestrator | Tuesday 17 March 2026 00:58:09 +0000 (0:00:01.307) 0:06:11.076 *********
2026-03-17 01:02:35.942791 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.942795 | orchestrator | 
2026-03-17 01:02:35.942798 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-17 01:02:35.942802 | orchestrator | Tuesday 17 March 2026 00:58:10 +0000 (0:00:00.256) 0:06:11.333 *********
2026-03-17 01:02:35.942806 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.942810 | orchestrator | 
2026-03-17 01:02:35.942814 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-17 01:02:35.942817 | orchestrator | Tuesday 17 March 2026 00:58:10 +0000 (0:00:00.125) 0:06:11.459 *********
2026-03-17 01:02:35.942821 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-17 01:02:35.942825 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-17 01:02:35.942831 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-17 01:02:35.942835 | orchestrator | 
2026-03-17 01:02:35.942839 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-17 01:02:35.942842 | orchestrator | Tuesday 17 March 2026 00:58:17 +0000 (0:00:06.745) 0:06:18.205 *********
2026-03-17 01:02:35.942846 | orchestrator | skipping: [testbed-node-2] => (item=balancer) 
2026-03-17 01:02:35.942850 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-17 01:02:35.942854 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-17 01:02:35.942858 | orchestrator | skipping: [testbed-node-2] => (item=status) 
2026-03-17 01:02:35.942861 | orchestrator | 
2026-03-17 01:02:35.942865 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-17 01:02:35.942869 | orchestrator | Tuesday 17 March 2026 00:58:21 +0000 (0:00:04.588) 0:06:22.793 *********
2026-03-17 01:02:35.942873 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.942877 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.942880 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.942884 | orchestrator | 
2026-03-17 01:02:35.942888 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-17 01:02:35.942892 | orchestrator | Tuesday 17 March 2026 00:58:22 +0000 (0:00:00.910) 0:06:23.704 *********
2026-03-17 01:02:35.942895 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:02:35.942899 | orchestrator | 
2026-03-17 01:02:35.942903 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-17 01:02:35.942907 | orchestrator | Tuesday 17 March 2026 00:58:22 +0000 (0:00:00.217) 0:06:24.100 *********
2026-03-17 01:02:35.942911 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.942914 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.942918 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.942922 | orchestrator | 
2026-03-17 01:02:35.942926 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-17 01:02:35.942932 | orchestrator | Tuesday 17 March 2026 00:58:23 +0000 (0:00:00.217) 0:06:24.317 *********
2026-03-17 01:02:35.942936 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.942939 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.942943 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.942947 | orchestrator | 
2026-03-17 01:02:35.942951 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-17 01:02:35.942955 | orchestrator | Tuesday 17 March 2026 00:58:24 +0000 (0:00:01.253) 0:06:25.571 *********
2026-03-17 01:02:35.942958 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0) 
2026-03-17 01:02:35.942962 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1) 
2026-03-17 01:02:35.942966 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2) 
2026-03-17 01:02:35.942970 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.942973 | orchestrator | 
2026-03-17 01:02:35.942977 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-17 01:02:35.942981 | orchestrator | Tuesday 17 March 2026 00:58:24 +0000 (0:00:00.536) 0:06:26.108 *********
2026-03-17 01:02:35.942985 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.942988 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.942992 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.942996 | orchestrator | 
2026-03-17 01:02:35.943000 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-17 01:02:35.943004 | orchestrator | 
2026-03-17 01:02:35.943007 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-17 01:02:35.943025 | orchestrator | Tuesday 17 March 2026 00:58:25 +0000 (0:00:00.444) 0:06:26.552 *********
2026-03-17 01:02:35.943029 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.943033 | orchestrator | 
2026-03-17 01:02:35.943037 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-17 01:02:35.943041 | orchestrator | Tuesday 17 March 2026 00:58:25 +0000 (0:00:00.567) 0:06:27.119 *********
2026-03-17 01:02:35.943044 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.943048 | orchestrator | 
2026-03-17 01:02:35.943052 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-17 01:02:35.943056 | orchestrator | Tuesday 17 March 2026 00:58:26 +0000 (0:00:00.454) 0:06:27.574 *********
2026-03-17 01:02:35.943060 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943063 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943067 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943071 | orchestrator | 
2026-03-17 01:02:35.943075 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-17 01:02:35.943079 | orchestrator | Tuesday 17 March 2026 00:58:26 +0000 (0:00:00.264) 0:06:27.838 *********
2026-03-17 01:02:35.943083 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943086 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943090 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943094 | orchestrator | 
2026-03-17 01:02:35.943098 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-17 01:02:35.943102 | orchestrator | Tuesday 17 March 2026 00:58:27 +0000 (0:00:00.757) 0:06:28.596 *********
2026-03-17 01:02:35.943105 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943109 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943113 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943117 | orchestrator | 
2026-03-17 01:02:35.943121 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-17 01:02:35.943125 | orchestrator | Tuesday 17 March 2026 00:58:27 +0000 (0:00:00.573) 0:06:29.169 *********
2026-03-17 01:02:35.943128 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943132 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943136 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943142 | orchestrator | 
2026-03-17 01:02:35.943146 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-17 01:02:35.943152 | orchestrator | Tuesday 17 March 2026 00:58:28 +0000 (0:00:00.680) 0:06:29.850 *********
2026-03-17 01:02:35.943156 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943159 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943163 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943167 | orchestrator | 
2026-03-17 01:02:35.943171 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-17 01:02:35.943175 | orchestrator | Tuesday 17 March 2026 00:58:28 +0000 (0:00:00.286) 0:06:30.136 *********
2026-03-17 01:02:35.943179 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943182 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943186 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943190 | orchestrator | 
2026-03-17 01:02:35.943194 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-17 01:02:35.943198 | orchestrator | Tuesday 17 March 2026 00:58:29 +0000 (0:00:00.540) 0:06:30.677 *********
2026-03-17 01:02:35.943202 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943205 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943209 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943213 | orchestrator | 
2026-03-17 01:02:35.943217 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-17 01:02:35.943221 | orchestrator | Tuesday 17 March 2026 00:58:29 +0000 (0:00:00.291) 0:06:30.969 *********
2026-03-17 01:02:35.943225 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943228 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943232 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943236 | orchestrator | 
2026-03-17 01:02:35.943240 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-17 01:02:35.943244 | orchestrator | Tuesday 17 March 2026 00:58:30 +0000 (0:00:00.694) 0:06:31.663 *********
2026-03-17 01:02:35.943247 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943251 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943255 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943259 | orchestrator | 
2026-03-17 01:02:35.943263 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-17 01:02:35.943267 | orchestrator | Tuesday 17 March 2026 00:58:31 +0000 (0:00:00.660) 0:06:32.324 *********
2026-03-17 01:02:35.943270 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943274 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943278 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943282 | orchestrator | 
2026-03-17 01:02:35.943286 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-17 01:02:35.943289 | orchestrator | Tuesday 17 March 2026 00:58:31 +0000 (0:00:00.743) 0:06:33.068 *********
2026-03-17 01:02:35.943293 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943297 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943301 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943305 | orchestrator | 
2026-03-17 01:02:35.943308 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-17 01:02:35.943312 | orchestrator | Tuesday 17 March 2026 00:58:32 +0000 (0:00:00.277) 0:06:33.345 *********
2026-03-17 01:02:35.943316 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943320 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943324 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943327 | orchestrator | 
2026-03-17 01:02:35.943331 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-17 01:02:35.943335 | orchestrator | Tuesday 17 March 2026 00:58:32 +0000 (0:00:00.277) 0:06:33.622 *********
2026-03-17 01:02:35.943339 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943343 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943347 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943350 | orchestrator | 
2026-03-17 01:02:35.943354 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-17 01:02:35.943362 | orchestrator | Tuesday 17 March 2026 00:58:32 +0000 (0:00:00.235) 0:06:33.858 *********
2026-03-17 01:02:35.943366 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943370 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943374 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943378 | orchestrator | 
2026-03-17 01:02:35.943382 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-17 01:02:35.943385 | orchestrator | Tuesday 17 March 2026 00:58:33 +0000 (0:00:00.436) 0:06:34.294 *********
2026-03-17 01:02:35.943389 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943393 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943397 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943401 | orchestrator | 
2026-03-17 01:02:35.943404 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-17 01:02:35.943408 | orchestrator | Tuesday 17 March 2026 00:58:33 +0000 (0:00:00.241) 0:06:34.535 *********
2026-03-17 01:02:35.943412 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943416 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943420 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943424 | orchestrator | 
2026-03-17 01:02:35.943427 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 01:02:35.943431 | orchestrator | Tuesday 17 March 2026 00:58:33 +0000 (0:00:00.238) 0:06:34.774 *********
2026-03-17 01:02:35.943435 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943439 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943442 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943446 | orchestrator | 
2026-03-17 01:02:35.943450 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 01:02:35.943454 | orchestrator | Tuesday 17 March 2026 00:58:33 +0000 (0:00:00.227) 0:06:35.001 *********
2026-03-17 01:02:35.943458 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943462 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943465 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943469 | orchestrator | 
2026-03-17 01:02:35.943473 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 01:02:35.943479 | orchestrator | Tuesday 17 March 2026 00:58:34 +0000 (0:00:00.407) 0:06:35.409 *********
2026-03-17 01:02:35.943486 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943492 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943498 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943508 | orchestrator | 
2026-03-17 01:02:35.943516 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-17 01:02:35.943522 | orchestrator | Tuesday 17 March 2026 00:58:34 +0000 (0:00:00.374) 0:06:35.783 *********
2026-03-17 01:02:35.943527 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943537 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943544 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943550 | orchestrator | 
2026-03-17 01:02:35.943555 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-17 01:02:35.943561 | orchestrator | Tuesday 17 March 2026 00:58:34 +0000 (0:00:00.244) 0:06:36.028 *********
2026-03-17 01:02:35.943566 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-17 01:02:35.943572 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-17 01:02:35.943578 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-17 01:02:35.943584 | orchestrator | 
2026-03-17 01:02:35.943590 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-17 01:02:35.943596 | orchestrator | Tuesday 17 March 2026 00:58:35 +0000 (0:00:00.634) 0:06:36.662 *********
2026-03-17 01:02:35.943602 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.943608 | orchestrator | 
2026-03-17 01:02:35.943615 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-17 01:02:35.943626 | orchestrator | Tuesday 17 March 2026 00:58:36 +0000 (0:00:00.563) 0:06:37.225 *********
2026-03-17 01:02:35.943633 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943639 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943643 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943646 | orchestrator | 
2026-03-17 01:02:35.943650 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-17 01:02:35.943654 | orchestrator | Tuesday 17 March 2026 00:58:36 +0000 (0:00:00.198) 0:06:37.424 *********
2026-03-17 01:02:35.943658 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.943662 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.943665 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.943669 | orchestrator | 
2026-03-17 01:02:35.943673 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-17 01:02:35.943677 | orchestrator | Tuesday 17 March 2026 00:58:36 +0000 (0:00:00.204) 0:06:37.628 *********
2026-03-17 01:02:35.943681 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943685 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943688 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943692 | orchestrator | 
2026-03-17 01:02:35.943696 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-17 01:02:35.943700 | orchestrator | Tuesday 17 March 2026 00:58:37 +0000 (0:00:00.655) 0:06:38.284 *********
2026-03-17 01:02:35.943704 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.943708 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.943711 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.943726 | orchestrator | 
2026-03-17 01:02:35.943732 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-17 01:02:35.943736 | orchestrator | Tuesday 17 March 2026 00:58:37 +0000 (0:00:00.289) 0:06:38.573 *********
2026-03-17 01:02:35.943740 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-17 01:02:35.943744 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-17 01:02:35.943748 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-17 01:02:35.943756 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-17 01:02:35.943760 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-17 01:02:35.943764 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-17 01:02:35.943768 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-17 01:02:35.943772 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-17 01:02:35.943776 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-17 01:02:35.943779 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-17 01:02:35.943783 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-17 01:02:35.943787 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-17 01:02:35.943791 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-17 01:02:35.943795 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-17 01:02:35.943798 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-17 01:02:35.943802 | orchestrator | 
2026-03-17 01:02:35.943806 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-17 01:02:35.943810 | orchestrator | Tuesday 17 March 2026 00:58:41 +0000 (0:00:03.702) 0:06:42.275 ********* 2026-03-17 01:02:35.943814 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.943820 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.943824 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.943828 | orchestrator | 2026-03-17 01:02:35.943832 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-17 01:02:35.943836 | orchestrator | Tuesday 17 March 2026 00:58:41 +0000 (0:00:00.259) 0:06:42.535 ********* 2026-03-17 01:02:35.943839 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.943843 | orchestrator | 2026-03-17 01:02:35.943847 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-17 01:02:35.943853 | orchestrator | Tuesday 17 March 2026 00:58:41 +0000 (0:00:00.571) 0:06:43.107 ********* 2026-03-17 01:02:35.943857 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-17 01:02:35.943861 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-17 01:02:35.943865 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-17 01:02:35.943868 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-17 01:02:35.943872 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-17 01:02:35.943876 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-17 01:02:35.943880 | orchestrator | 2026-03-17 01:02:35.943884 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-17 01:02:35.943887 | orchestrator | Tuesday 17 March 2026 00:58:42 +0000 (0:00:00.850) 0:06:43.957 ********* 2026-03-17 01:02:35.943891 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.943895 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-17 01:02:35.943899 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 01:02:35.943903 | orchestrator | 2026-03-17 01:02:35.943906 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-17 01:02:35.943910 | orchestrator | Tuesday 17 March 2026 00:58:44 +0000 (0:00:02.012) 0:06:45.970 ********* 2026-03-17 01:02:35.943914 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 01:02:35.943918 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-17 01:02:35.943921 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.943925 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 01:02:35.943929 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-17 01:02:35.943933 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.943937 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 01:02:35.943940 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-17 01:02:35.943944 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.943948 | orchestrator | 2026-03-17 01:02:35.943952 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-17 01:02:35.943956 | orchestrator | Tuesday 17 March 2026 00:58:46 +0000 (0:00:01.332) 0:06:47.302 ********* 2026-03-17 01:02:35.943960 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:02:35.943963 | orchestrator | 2026-03-17 01:02:35.943967 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-17 01:02:35.943971 | orchestrator | Tuesday 17 March 2026 00:58:48 +0000 (0:00:02.260) 0:06:49.563 ********* 2026-03-17 01:02:35.943975 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.943979 | orchestrator | 2026-03-17 01:02:35.943982 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-17 01:02:35.943986 | orchestrator | Tuesday 17 March 2026 00:58:48 +0000 (0:00:00.572) 0:06:50.135 ********* 2026-03-17 01:02:35.943990 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-16ca22cf-64f9-579d-994c-d43933026c5f', 'data_vg': 'ceph-16ca22cf-64f9-579d-994c-d43933026c5f'}) 2026-03-17 01:02:35.943995 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-50c44467-b3f7-539a-99b7-df2211d1583b', 'data_vg': 'ceph-50c44467-b3f7-539a-99b7-df2211d1583b'}) 2026-03-17 01:02:35.944005 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d77b95b6-dc37-5eed-9a6e-c7871424e120', 'data_vg': 'ceph-d77b95b6-dc37-5eed-9a6e-c7871424e120'}) 2026-03-17 01:02:35.944009 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9465b490-647b-5adb-8e2e-a5649c4bc673', 'data_vg': 'ceph-9465b490-647b-5adb-8e2e-a5649c4bc673'}) 2026-03-17 01:02:35.944013 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ec88a4df-1f79-596d-b281-118c477c78df', 'data_vg': 'ceph-ec88a4df-1f79-596d-b281-118c477c78df'}) 2026-03-17 01:02:35.944017 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5', 'data_vg': 'ceph-b13aeae0-05c6-5bfd-ada4-b68b1762c1d5'}) 2026-03-17 01:02:35.944021 | orchestrator | 2026-03-17 01:02:35.944024 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-17 01:02:35.944028 | orchestrator | Tuesday 17 March 2026 00:59:23 +0000 (0:00:34.541) 0:07:24.677 ********* 2026-03-17 01:02:35.944032 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944036 | orchestrator | skipping: [testbed-node-4] 2026-03-17 
01:02:35.944040 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944043 | orchestrator | 2026-03-17 01:02:35.944047 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-17 01:02:35.944051 | orchestrator | Tuesday 17 March 2026 00:59:23 +0000 (0:00:00.507) 0:07:25.185 ********* 2026-03-17 01:02:35.944055 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.944059 | orchestrator | 2026-03-17 01:02:35.944062 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-17 01:02:35.944066 | orchestrator | Tuesday 17 March 2026 00:59:24 +0000 (0:00:00.487) 0:07:25.673 ********* 2026-03-17 01:02:35.944070 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.944074 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.944078 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.944081 | orchestrator | 2026-03-17 01:02:35.944085 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-17 01:02:35.944089 | orchestrator | Tuesday 17 March 2026 00:59:25 +0000 (0:00:00.692) 0:07:26.365 ********* 2026-03-17 01:02:35.944093 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.944097 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.944100 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.944104 | orchestrator | 2026-03-17 01:02:35.944108 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-17 01:02:35.944112 | orchestrator | Tuesday 17 March 2026 00:59:28 +0000 (0:00:03.086) 0:07:29.452 ********* 2026-03-17 01:02:35.944116 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.944120 | orchestrator | 2026-03-17 01:02:35.944124 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-17 01:02:35.944127 | orchestrator | Tuesday 17 March 2026 00:59:28 +0000 (0:00:00.460) 0:07:29.912 ********* 2026-03-17 01:02:35.944131 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.944135 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.944139 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.944143 | orchestrator | 2026-03-17 01:02:35.944147 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-17 01:02:35.944150 | orchestrator | Tuesday 17 March 2026 00:59:29 +0000 (0:00:01.112) 0:07:31.024 ********* 2026-03-17 01:02:35.944154 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.944158 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.944162 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.944166 | orchestrator | 2026-03-17 01:02:35.944169 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-17 01:02:35.944173 | orchestrator | Tuesday 17 March 2026 00:59:30 +0000 (0:00:01.160) 0:07:32.184 ********* 2026-03-17 01:02:35.944179 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.944188 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.944192 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.944196 | orchestrator | 2026-03-17 01:02:35.944200 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-17 01:02:35.944204 | orchestrator | Tuesday 17 March 2026 00:59:32 +0000 (0:00:01.807) 0:07:33.991 ********* 2026-03-17 01:02:35.944208 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944211 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944215 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944219 | orchestrator | 2026-03-17 01:02:35.944223 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-17 01:02:35.944227 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:00.256) 0:07:34.248 ********* 2026-03-17 01:02:35.944230 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944234 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944238 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944242 | orchestrator | 2026-03-17 01:02:35.944246 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-17 01:02:35.944249 | orchestrator | Tuesday 17 March 2026 00:59:33 +0000 (0:00:00.267) 0:07:34.516 ********* 2026-03-17 01:02:35.944253 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-17 01:02:35.944257 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-17 01:02:35.944261 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-17 01:02:35.944265 | orchestrator | ok: [testbed-node-3] => (item=2) 2026-03-17 01:02:35.944268 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-03-17 01:02:35.944272 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-17 01:02:35.944276 | orchestrator | 2026-03-17 01:02:35.944280 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-17 01:02:35.944284 | orchestrator | Tuesday 17 March 2026 00:59:34 +0000 (0:00:01.260) 0:07:35.777 ********* 2026-03-17 01:02:35.944288 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-17 01:02:35.944291 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-17 01:02:35.944297 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-03-17 01:02:35.944301 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-03-17 01:02:35.944305 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-17 01:02:35.944309 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-17 01:02:35.944313 | orchestrator | 2026-03-17 01:02:35.944316 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-17 01:02:35.944320 | orchestrator | Tuesday 17 March 2026 00:59:36 +0000 (0:00:02.238) 0:07:38.015 ********* 2026-03-17 01:02:35.944324 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-17 01:02:35.944328 | orchestrator | changed: [testbed-node-5] => (item=0) 2026-03-17 01:02:35.944332 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-17 01:02:35.944335 | orchestrator | changed: [testbed-node-3] => (item=2) 2026-03-17 01:02:35.944339 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-17 01:02:35.944363 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-03-17 01:02:35.944367 | orchestrator | 2026-03-17 01:02:35.944371 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-17 01:02:35.944375 | orchestrator | Tuesday 17 March 2026 00:59:40 +0000 (0:00:03.359) 0:07:41.375 ********* 2026-03-17 01:02:35.944378 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944382 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944386 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:02:35.944390 | orchestrator | 2026-03-17 01:02:35.944393 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-17 01:02:35.944397 | orchestrator | Tuesday 17 March 2026 00:59:42 +0000 (0:00:01.973) 0:07:43.348 ********* 2026-03-17 01:02:35.944401 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944405 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944411 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-17 01:02:35.944415 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:02:35.944419 | orchestrator | 2026-03-17 01:02:35.944423 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-17 01:02:35.944426 | orchestrator | Tuesday 17 March 2026 00:59:54 +0000 (0:00:12.787) 0:07:56.136 ********* 2026-03-17 01:02:35.944430 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944434 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944438 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944441 | orchestrator | 2026-03-17 01:02:35.944449 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 01:02:35.944452 | orchestrator | Tuesday 17 March 2026 00:59:55 +0000 (0:00:00.823) 0:07:56.959 ********* 2026-03-17 01:02:35.944456 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944460 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944464 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944468 | orchestrator | 2026-03-17 01:02:35.944471 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-17 01:02:35.944475 | orchestrator | Tuesday 17 March 2026 00:59:56 +0000 (0:00:00.541) 0:07:57.501 ********* 2026-03-17 01:02:35.944479 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.944483 | orchestrator | 2026-03-17 01:02:35.944487 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-17 01:02:35.944491 | orchestrator | Tuesday 17 March 2026 00:59:56 +0000 (0:00:00.504) 0:07:58.005 ********* 2026-03-17 01:02:35.944494 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.944498 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-17 01:02:35.944502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.944506 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944510 | orchestrator | 2026-03-17 01:02:35.944513 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-17 01:02:35.944517 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:00.377) 0:07:58.383 ********* 2026-03-17 01:02:35.944521 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944525 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944528 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944532 | orchestrator | 2026-03-17 01:02:35.944536 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-17 01:02:35.944540 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:00.289) 0:07:58.672 ********* 2026-03-17 01:02:35.944544 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944547 | orchestrator | 2026-03-17 01:02:35.944551 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-17 01:02:35.944555 | orchestrator | Tuesday 17 March 2026 00:59:57 +0000 (0:00:00.199) 0:07:58.871 ********* 2026-03-17 01:02:35.944559 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944562 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944566 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944570 | orchestrator | 2026-03-17 01:02:35.944574 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-17 01:02:35.944577 | orchestrator | Tuesday 17 March 2026 00:59:58 +0000 (0:00:00.648) 0:07:59.520 ********* 2026-03-17 01:02:35.944581 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944585 | orchestrator | 2026-03-17 01:02:35.944589 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-17 01:02:35.944592 | orchestrator | Tuesday 17 March 2026 00:59:58 +0000 (0:00:00.230) 0:07:59.750 ********* 2026-03-17 01:02:35.944596 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944600 | orchestrator | 2026-03-17 01:02:35.944604 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-17 01:02:35.944610 | orchestrator | Tuesday 17 March 2026 00:59:58 +0000 (0:00:00.219) 0:07:59.970 ********* 2026-03-17 01:02:35.944614 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944618 | orchestrator | 2026-03-17 01:02:35.944621 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-17 01:02:35.944625 | orchestrator | Tuesday 17 March 2026 00:59:58 +0000 (0:00:00.115) 0:08:00.086 ********* 2026-03-17 01:02:35.944632 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944636 | orchestrator | 2026-03-17 01:02:35.944640 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-17 01:02:35.944643 | orchestrator | Tuesday 17 March 2026 00:59:59 +0000 (0:00:00.209) 0:08:00.295 ********* 2026-03-17 01:02:35.944647 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944651 | orchestrator | 2026-03-17 01:02:35.944655 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-17 01:02:35.944659 | orchestrator | Tuesday 17 March 2026 00:59:59 +0000 (0:00:00.210) 0:08:00.505 ********* 2026-03-17 01:02:35.944662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.944666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.944670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.944674 | orchestrator | skipping: [testbed-node-3] 2026-03-17 
01:02:35.944678 | orchestrator | 2026-03-17 01:02:35.944682 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-17 01:02:35.944685 | orchestrator | Tuesday 17 March 2026 00:59:59 +0000 (0:00:00.415) 0:08:00.921 ********* 2026-03-17 01:02:35.944689 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944693 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944697 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944701 | orchestrator | 2026-03-17 01:02:35.944704 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-17 01:02:35.944708 | orchestrator | Tuesday 17 March 2026 01:00:00 +0000 (0:00:00.289) 0:08:01.210 ********* 2026-03-17 01:02:35.944712 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944739 | orchestrator | 2026-03-17 01:02:35.944743 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-17 01:02:35.944747 | orchestrator | Tuesday 17 March 2026 01:00:00 +0000 (0:00:00.845) 0:08:02.055 ********* 2026-03-17 01:02:35.944751 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944755 | orchestrator | 2026-03-17 01:02:35.944759 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-17 01:02:35.944762 | orchestrator | 2026-03-17 01:02:35.944766 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-17 01:02:35.944770 | orchestrator | Tuesday 17 March 2026 01:00:01 +0000 (0:00:00.644) 0:08:02.699 ********* 2026-03-17 01:02:35.944776 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.944781 | orchestrator | 2026-03-17 01:02:35.944785 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-17 01:02:35.944788 | orchestrator | Tuesday 17 March 2026 01:00:02 +0000 (0:00:01.239) 0:08:03.939 ********* 2026-03-17 01:02:35.944792 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:02:35.944796 | orchestrator | 2026-03-17 01:02:35.944800 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-17 01:02:35.944804 | orchestrator | Tuesday 17 March 2026 01:00:03 +0000 (0:00:01.208) 0:08:05.147 ********* 2026-03-17 01:02:35.944807 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944811 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944815 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944819 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.944825 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.944829 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.944833 | orchestrator | 2026-03-17 01:02:35.944837 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-17 01:02:35.944841 | orchestrator | Tuesday 17 March 2026 01:00:05 +0000 (0:00:01.226) 0:08:06.374 ********* 2026-03-17 01:02:35.944844 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.944848 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.944852 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.944856 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.944860 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.944863 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.944867 | orchestrator | 2026-03-17 01:02:35.944871 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-17 01:02:35.944875 | orchestrator | Tuesday 17 
March 2026 01:00:05 +0000 (0:00:00.657) 0:08:07.031 ********* 2026-03-17 01:02:35.944879 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.944882 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.944886 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.944890 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.944894 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.944897 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.944901 | orchestrator | 2026-03-17 01:02:35.944905 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-17 01:02:35.944909 | orchestrator | Tuesday 17 March 2026 01:00:06 +0000 (0:00:01.020) 0:08:08.052 ********* 2026-03-17 01:02:35.944913 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.944916 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.944920 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.944924 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.944928 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.944931 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.944935 | orchestrator | 2026-03-17 01:02:35.944939 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-17 01:02:35.944943 | orchestrator | Tuesday 17 March 2026 01:00:07 +0000 (0:00:00.730) 0:08:08.782 ********* 2026-03-17 01:02:35.944947 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944951 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944954 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944958 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.944962 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.944966 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.944970 | orchestrator | 2026-03-17 01:02:35.944974 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2026-03-17 01:02:35.944980 | orchestrator | Tuesday 17 March 2026 01:00:08 +0000 (0:00:00.923) 0:08:09.706 ********* 2026-03-17 01:02:35.944984 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.944988 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.944992 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.944995 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.944999 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.945003 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.945007 | orchestrator | 2026-03-17 01:02:35.945011 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 01:02:35.945015 | orchestrator | Tuesday 17 March 2026 01:00:09 +0000 (0:00:00.659) 0:08:10.365 ********* 2026-03-17 01:02:35.945018 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.945022 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.945026 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.945030 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:02:35.945033 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:02:35.945037 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:02:35.945041 | orchestrator | 2026-03-17 01:02:35.945045 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 01:02:35.945051 | orchestrator | Tuesday 17 March 2026 01:00:09 +0000 (0:00:00.524) 0:08:10.890 ********* 2026-03-17 01:02:35.945055 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.945058 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.945062 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.945066 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:02:35.945070 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:02:35.945074 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:02:35.945077 | 
orchestrator |
2026-03-17 01:02:35.945081 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-17 01:02:35.945085 | orchestrator | Tuesday 17 March 2026 01:00:10 +0000 (0:00:01.235) 0:08:12.125 *********
2026-03-17 01:02:35.945089 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945093 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945096 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945100 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.945104 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.945108 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.945111 | orchestrator |
2026-03-17 01:02:35.945115 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-17 01:02:35.945119 | orchestrator | Tuesday 17 March 2026 01:00:11 +0000 (0:00:00.941) 0:08:13.067 *********
2026-03-17 01:02:35.945123 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.945127 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.945131 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.945134 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.945140 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.945144 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.945148 | orchestrator |
2026-03-17 01:02:35.945152 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-17 01:02:35.945156 | orchestrator | Tuesday 17 March 2026 01:00:12 +0000 (0:00:00.792) 0:08:13.860 *********
2026-03-17 01:02:35.945160 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.945163 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.945167 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.945171 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.945175 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.945179 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.945182 | orchestrator |
2026-03-17 01:02:35.945186 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-17 01:02:35.945190 | orchestrator | Tuesday 17 March 2026 01:00:13 +0000 (0:00:00.568) 0:08:14.428 *********
2026-03-17 01:02:35.945194 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945198 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945202 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945205 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.945209 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.945213 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.945217 | orchestrator |
2026-03-17 01:02:35.945221 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-17 01:02:35.945224 | orchestrator | Tuesday 17 March 2026 01:00:14 +0000 (0:00:00.821) 0:08:15.250 *********
2026-03-17 01:02:35.945228 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945232 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945236 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945240 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.945243 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.945247 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.945251 | orchestrator |
2026-03-17 01:02:35.945255 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-17 01:02:35.945259 | orchestrator | Tuesday 17 March 2026 01:00:14 +0000 (0:00:00.573) 0:08:15.824 *********
2026-03-17 01:02:35.945262 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945266 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945270 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945276 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.945280 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.945284 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.945288 | orchestrator |
2026-03-17 01:02:35.945291 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-17 01:02:35.945295 | orchestrator | Tuesday 17 March 2026 01:00:15 +0000 (0:00:00.824) 0:08:16.648 *********
2026-03-17 01:02:35.945299 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.945303 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.945307 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.945310 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.945314 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.945318 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.945322 | orchestrator |
2026-03-17 01:02:35.945326 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-17 01:02:35.945329 | orchestrator | Tuesday 17 March 2026 01:00:16 +0000 (0:00:00.552) 0:08:17.200 *********
2026-03-17 01:02:35.945333 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.945337 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.945341 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:02:35.945344 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.945348 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:02:35.945352 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:02:35.945356 | orchestrator |
2026-03-17 01:02:35.945360 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 01:02:35.945366 | orchestrator | Tuesday 17 March 2026 01:00:16 +0000 (0:00:00.895) 0:08:18.096 *********
2026-03-17 01:02:35.945369 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.945373 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.945377 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.945381 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.945385 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.945388 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.945392 | orchestrator |
2026-03-17 01:02:35.945396 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 01:02:35.945400 | orchestrator | Tuesday 17 March 2026 01:00:17 +0000 (0:00:00.593) 0:08:18.689 *********
2026-03-17 01:02:35.945404 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945408 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945411 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945415 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.945419 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.945423 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.945426 | orchestrator |
2026-03-17 01:02:35.945430 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 01:02:35.945434 | orchestrator | Tuesday 17 March 2026 01:00:18 +0000 (0:00:00.849) 0:08:19.538 *********
2026-03-17 01:02:35.945438 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945441 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945445 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945449 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.945453 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.945456 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.945460 | orchestrator |
2026-03-17 01:02:35.945464 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-17 01:02:35.945468 | orchestrator | Tuesday 17 March 2026 01:00:19 +0000 (0:00:01.278) 0:08:20.817 *********
2026-03-17 01:02:35.945472 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:02:35.945475 | orchestrator |
2026-03-17 01:02:35.945479 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-17 01:02:35.945483 | orchestrator | Tuesday 17 March 2026 01:00:23 +0000 (0:00:04.036) 0:08:24.853 *********
2026-03-17 01:02:35.945487 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:02:35.945491 | orchestrator |
2026-03-17 01:02:35.945497 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-17 01:02:35.945501 | orchestrator | Tuesday 17 March 2026 01:00:25 +0000 (0:00:01.861) 0:08:26.714 *********
2026-03-17 01:02:35.945504 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.945510 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.945514 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.945518 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.945522 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.945526 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.945529 | orchestrator |
2026-03-17 01:02:35.945533 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-17 01:02:35.945537 | orchestrator | Tuesday 17 March 2026 01:00:26 +0000 (0:00:01.408) 0:08:28.123 *********
2026-03-17 01:02:35.945541 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.945545 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.945548 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.945552 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.945556 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.945560 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.945564 | orchestrator |
2026-03-17 01:02:35.945567 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-17 01:02:35.945571 | orchestrator | Tuesday 17 March 2026 01:00:28 +0000 (0:00:01.194) 0:08:29.318 *********
2026-03-17 01:02:35.945575 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:02:35.945579 | orchestrator |
2026-03-17 01:02:35.945583 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-17 01:02:35.945587 | orchestrator | Tuesday 17 March 2026 01:00:29 +0000 (0:00:01.199) 0:08:30.517 *********
2026-03-17 01:02:35.945591 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.945595 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.945598 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.945602 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.945606 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.945610 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.945613 | orchestrator |
2026-03-17 01:02:35.945617 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-17 01:02:35.945621 | orchestrator | Tuesday 17 March 2026 01:00:30 +0000 (0:00:01.570) 0:08:32.088 *********
2026-03-17 01:02:35.945625 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.945629 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.945632 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.945636 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.945640 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.945644 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.945647 | orchestrator |
2026-03-17 01:02:35.945651 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-17 01:02:35.945655 | orchestrator | Tuesday 17 March 2026 01:00:34 +0000 (0:00:03.451) 0:08:35.540 *********
2026-03-17 01:02:35.945659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:02:35.945663 | orchestrator |
2026-03-17 01:02:35.945667 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-17 01:02:35.945670 | orchestrator | Tuesday 17 March 2026 01:00:35 +0000 (0:00:01.172) 0:08:36.712 *********
2026-03-17 01:02:35.945674 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945678 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945682 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945686 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.945689 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.945693 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.945697 | orchestrator |
2026-03-17 01:02:35.945703 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-17 01:02:35.945709 | orchestrator | Tuesday 17 March 2026 01:00:36 +0000 (0:00:00.610) 0:08:37.322 *********
2026-03-17 01:02:35.945713 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.945732 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.945736 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.945740 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:02:35.945744 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:02:35.945747 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:02:35.945751 | orchestrator |
2026-03-17 01:02:35.945755 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-17 01:02:35.945759 | orchestrator | Tuesday 17 March 2026 01:00:38 +0000 (0:00:02.560) 0:08:39.883 *********
2026-03-17 01:02:35.945762 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945766 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945770 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945774 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:02:35.945778 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:02:35.945781 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:02:35.945785 | orchestrator |
2026-03-17 01:02:35.945789 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-17 01:02:35.945793 | orchestrator |
2026-03-17 01:02:35.945796 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-17 01:02:35.945800 | orchestrator | Tuesday 17 March 2026 01:00:39 +0000 (0:00:00.845) 0:08:40.728 *********
2026-03-17 01:02:35.945804 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.945808 | orchestrator |
2026-03-17 01:02:35.945812 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-17 01:02:35.945816 | orchestrator | Tuesday 17 March 2026 01:00:40 +0000 (0:00:00.771) 0:08:41.499 *********
2026-03-17 01:02:35.945821 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.945828 | orchestrator |
2026-03-17 01:02:35.945833 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-17 01:02:35.945839 | orchestrator | Tuesday 17 March 2026 01:00:40 +0000 (0:00:00.501) 0:08:42.001 *********
2026-03-17 01:02:35.945847 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.945856 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.945861 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.945866 | orchestrator |
2026-03-17 01:02:35.945872 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-17 01:02:35.945881 | orchestrator | Tuesday 17 March 2026 01:00:41 +0000 (0:00:00.519) 0:08:42.521 *********
2026-03-17 01:02:35.945887 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945892 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945897 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945903 | orchestrator |
2026-03-17 01:02:35.945908 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-17 01:02:35.945914 | orchestrator | Tuesday 17 March 2026 01:00:42 +0000 (0:00:00.715) 0:08:43.236 *********
2026-03-17 01:02:35.945920 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945925 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945930 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945936 | orchestrator |
2026-03-17 01:02:35.945941 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-17 01:02:35.945947 | orchestrator | Tuesday 17 March 2026 01:00:42 +0000 (0:00:00.782) 0:08:44.019 *********
2026-03-17 01:02:35.945953 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.945959 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.945965 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.945970 | orchestrator |
2026-03-17 01:02:35.945976 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-17 01:02:35.945990 | orchestrator | Tuesday 17 March 2026 01:00:43 +0000 (0:00:00.771) 0:08:44.791 *********
2026-03-17 01:02:35.945996 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946001 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946006 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946046 | orchestrator |
2026-03-17 01:02:35.946054 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-17 01:02:35.946060 | orchestrator | Tuesday 17 March 2026 01:00:44 +0000 (0:00:00.556) 0:08:45.348 *********
2026-03-17 01:02:35.946065 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946070 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946076 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946082 | orchestrator |
2026-03-17 01:02:35.946088 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-17 01:02:35.946094 | orchestrator | Tuesday 17 March 2026 01:00:44 +0000 (0:00:00.316) 0:08:45.664 *********
2026-03-17 01:02:35.946100 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946107 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946113 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946119 | orchestrator |
2026-03-17 01:02:35.946125 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-17 01:02:35.946130 | orchestrator | Tuesday 17 March 2026 01:00:44 +0000 (0:00:00.297) 0:08:45.961 *********
2026-03-17 01:02:35.946136 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.946141 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.946147 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.946153 | orchestrator |
2026-03-17 01:02:35.946159 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-17 01:02:35.946164 | orchestrator | Tuesday 17 March 2026 01:00:45 +0000 (0:00:00.662) 0:08:46.624 *********
2026-03-17 01:02:35.946170 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.946176 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.946181 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.946187 | orchestrator |
2026-03-17 01:02:35.946193 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-17 01:02:35.946198 | orchestrator | Tuesday 17 March 2026 01:00:46 +0000 (0:00:01.003) 0:08:47.628 *********
2026-03-17 01:02:35.946204 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946209 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946215 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946221 | orchestrator |
2026-03-17 01:02:35.946227 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-17 01:02:35.946238 | orchestrator | Tuesday 17 March 2026 01:00:46 +0000 (0:00:00.307) 0:08:47.936 *********
2026-03-17 01:02:35.946245 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946251 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946257 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946262 | orchestrator |
2026-03-17 01:02:35.946268 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-17 01:02:35.946275 | orchestrator | Tuesday 17 March 2026 01:00:47 +0000 (0:00:00.310) 0:08:48.246 *********
2026-03-17 01:02:35.946281 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.946286 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.946291 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.946297 | orchestrator |
2026-03-17 01:02:35.946303 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-17 01:02:35.946309 | orchestrator | Tuesday 17 March 2026 01:00:47 +0000 (0:00:00.311) 0:08:48.557 *********
2026-03-17 01:02:35.946315 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.946320 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.946326 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.946333 | orchestrator |
2026-03-17 01:02:35.946340 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-17 01:02:35.946346 | orchestrator | Tuesday 17 March 2026 01:00:47 +0000 (0:00:00.599) 0:08:49.157 *********
2026-03-17 01:02:35.946358 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.946365 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.946371 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.946378 | orchestrator |
2026-03-17 01:02:35.946383 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-17 01:02:35.946387 | orchestrator | Tuesday 17 March 2026 01:00:48 +0000 (0:00:00.373) 0:08:49.530 *********
2026-03-17 01:02:35.946391 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946395 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946399 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946402 | orchestrator |
2026-03-17 01:02:35.946406 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-17 01:02:35.946410 | orchestrator | Tuesday 17 March 2026 01:00:48 +0000 (0:00:00.306) 0:08:49.836 *********
2026-03-17 01:02:35.946414 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946418 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946422 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946425 | orchestrator |
2026-03-17 01:02:35.946429 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-17 01:02:35.946433 | orchestrator | Tuesday 17 March 2026 01:00:48 +0000 (0:00:00.293) 0:08:50.129 *********
2026-03-17 01:02:35.946440 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946444 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946448 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946452 | orchestrator |
2026-03-17 01:02:35.946456 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-17 01:02:35.946460 | orchestrator | Tuesday 17 March 2026 01:00:49 +0000 (0:00:00.552) 0:08:50.682 *********
2026-03-17 01:02:35.946463 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.946467 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.946471 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.946475 | orchestrator |
2026-03-17 01:02:35.946479 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-17 01:02:35.946483 | orchestrator | Tuesday 17 March 2026 01:00:49 +0000 (0:00:00.323) 0:08:51.005 *********
2026-03-17 01:02:35.946486 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.946490 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.946494 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.946498 | orchestrator |
2026-03-17 01:02:35.946501 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-17 01:02:35.946505 | orchestrator | Tuesday 17 March 2026 01:00:50 +0000 (0:00:00.721) 0:08:51.727 *********
2026-03-17 01:02:35.946509 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946513 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946517 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-17 01:02:35.946521 | orchestrator |
2026-03-17 01:02:35.946524 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-17 01:02:35.946528 | orchestrator | Tuesday 17 March 2026 01:00:51 +0000 (0:00:00.661) 0:08:52.388 *********
2026-03-17 01:02:35.946532 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:02:35.946536 | orchestrator |
2026-03-17 01:02:35.946540 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-17 01:02:35.946543 | orchestrator | Tuesday 17 March 2026 01:00:53 +0000 (0:00:01.998) 0:08:54.386 *********
2026-03-17 01:02:35.946548 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-17 01:02:35.946553 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946557 | orchestrator |
2026-03-17 01:02:35.946561 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-17 01:02:35.946565 | orchestrator | Tuesday 17 March 2026 01:00:53 +0000 (0:00:00.188) 0:08:54.574 *********
2026-03-17 01:02:35.946574 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-17 01:02:35.946582 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-17 01:02:35.946586 | orchestrator |
2026-03-17 01:02:35.946594 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-17 01:02:35.946598 | orchestrator | Tuesday 17 March 2026 01:01:01 +0000 (0:00:08.291) 0:09:02.866 *********
2026-03-17 01:02:35.946602 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-17 01:02:35.946606 | orchestrator |
2026-03-17 01:02:35.946609 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-17 01:02:35.946613 | orchestrator | Tuesday 17 March 2026 01:01:04 +0000 (0:00:02.938) 0:09:05.805 *********
2026-03-17 01:02:35.946617 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.946621 | orchestrator |
2026-03-17 01:02:35.946625 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-17 01:02:35.946629 | orchestrator | Tuesday 17 March 2026 01:01:05 +0000 (0:00:00.784) 0:09:06.589 *********
2026-03-17 01:02:35.946633 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-17 01:02:35.946637 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-17 01:02:35.946641 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-17 01:02:35.946648 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-17 01:02:35.946652 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-17 01:02:35.946656 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-17 01:02:35.946659 | orchestrator |
2026-03-17 01:02:35.946663 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-17 01:02:35.946667 | orchestrator | Tuesday 17 March 2026 01:01:06 +0000 (0:00:01.127) 0:09:07.716 *********
2026-03-17 01:02:35.946671 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-17 01:02:35.946675 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-17 01:02:35.946678 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-17 01:02:35.946682 | orchestrator |
2026-03-17 01:02:35.946686 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-17 01:02:35.946690 | orchestrator | Tuesday 17 March 2026 01:01:08 +0000 (0:00:02.392) 0:09:10.108 *********
2026-03-17 01:02:35.946694 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-17 01:02:35.946702 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-17 01:02:35.946706 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.946710 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-17 01:02:35.946714 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-17 01:02:35.946766 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.946770 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-17 01:02:35.946773 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-17 01:02:35.946777 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.946781 | orchestrator |
2026-03-17 01:02:35.946785 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-17 01:02:35.946789 | orchestrator | Tuesday 17 March 2026 01:01:10 +0000 (0:00:01.318) 0:09:11.427 *********
2026-03-17 01:02:35.946792 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.946796 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.946803 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.946807 | orchestrator |
2026-03-17 01:02:35.946811 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-17 01:02:35.946815 | orchestrator | Tuesday 17 March 2026 01:01:12 +0000 (0:00:02.760) 0:09:14.187 *********
2026-03-17 01:02:35.946819 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.946822 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.946826 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.946831 | orchestrator |
2026-03-17 01:02:35.946837 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-17 01:02:35.946843 | orchestrator | Tuesday 17 March 2026 01:01:13 +0000 (0:00:00.689) 0:09:14.877 *********
2026-03-17 01:02:35.946851 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.946856 | orchestrator |
2026-03-17 01:02:35.946861 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-17 01:02:35.946871 | orchestrator | Tuesday 17 March 2026 01:01:14 +0000 (0:00:00.501) 0:09:15.378 *********
2026-03-17 01:02:35.946877 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.946883 | orchestrator |
2026-03-17 01:02:35.946889 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-17 01:02:35.946896 | orchestrator | Tuesday 17 March 2026 01:01:14 +0000 (0:00:00.735) 0:09:16.114 *********
2026-03-17 01:02:35.946902 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.946908 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.946915 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.946922 | orchestrator |
2026-03-17 01:02:35.946926 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-17 01:02:35.946930 | orchestrator | Tuesday 17 March 2026 01:01:16 +0000 (0:00:01.135) 0:09:17.249 *********
2026-03-17 01:02:35.946933 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.946937 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.946941 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.946945 | orchestrator |
2026-03-17 01:02:35.946949 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-17 01:02:35.946952 | orchestrator | Tuesday 17 March 2026 01:01:17 +0000 (0:00:01.054) 0:09:18.304 *********
2026-03-17 01:02:35.946956 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.946960 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.946964 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.946968 | orchestrator |
2026-03-17 01:02:35.946971 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-17 01:02:35.946978 | orchestrator | Tuesday 17 March 2026 01:01:18 +0000 (0:00:01.825) 0:09:20.130 *********
2026-03-17 01:02:35.946982 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.946986 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.946990 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.946994 | orchestrator |
2026-03-17 01:02:35.946998 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-17 01:02:35.947001 | orchestrator | Tuesday 17 March 2026 01:01:20 +0000 (0:00:02.016) 0:09:22.146 *********
2026-03-17 01:02:35.947005 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.947009 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.947013 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.947017 | orchestrator |
2026-03-17 01:02:35.947020 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-17 01:02:35.947024 | orchestrator | Tuesday 17 March 2026 01:01:22 +0000 (0:00:01.138) 0:09:23.285 *********
2026-03-17 01:02:35.947028 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.947032 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.947035 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.947039 | orchestrator |
2026-03-17 01:02:35.947043 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-17 01:02:35.947050 | orchestrator | Tuesday 17 March 2026 01:01:22 +0000 (0:00:00.843) 0:09:24.128 *********
2026-03-17 01:02:35.947054 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.947058 | orchestrator |
2026-03-17 01:02:35.947062 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-17 01:02:35.947066 | orchestrator | Tuesday 17 March 2026 01:01:23 +0000 (0:00:00.478) 0:09:24.607 *********
2026-03-17 01:02:35.947069 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.947073 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.947077 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.947081 | orchestrator |
2026-03-17 01:02:35.947085 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-17 01:02:35.947088 | orchestrator | Tuesday 17 March 2026 01:01:23 +0000 (0:00:00.289) 0:09:24.896 *********
2026-03-17 01:02:35.947092 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:02:35.947096 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:02:35.947100 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:02:35.947104 | orchestrator |
2026-03-17 01:02:35.947107 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-17 01:02:35.947111 | orchestrator | Tuesday 17 March 2026 01:01:25 +0000 (0:00:01.338) 0:09:26.235 *********
2026-03-17 01:02:35.947117 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 01:02:35.947121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 01:02:35.947125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 01:02:35.947129 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.947133 | orchestrator |
2026-03-17 01:02:35.947137 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-17 01:02:35.947141 | orchestrator | Tuesday 17 March 2026 01:01:25 +0000 (0:00:00.576) 0:09:26.812 *********
2026-03-17 01:02:35.947144 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.947148 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.947152 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.947156 | orchestrator |
2026-03-17 01:02:35.947159 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-17 01:02:35.947163 | orchestrator |
2026-03-17 01:02:35.947167 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-17 01:02:35.947171 | orchestrator | Tuesday 17 March 2026 01:01:26 +0000 (0:00:00.579) 0:09:27.391 *********
2026-03-17 01:02:35.947175 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:02:35.947179 | orchestrator |
2026-03-17 01:02:35.947183 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-17 01:02:35.947187 | orchestrator | Tuesday 17 March 2026 01:01:26 +0000 (0:00:00.725) 0:09:28.116 *********
2026-03-17 01:02:35.947191 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3, testbed-node-5
2026-03-17 01:02:35.947195 | orchestrator |
2026-03-17 01:02:35.947199 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-17 01:02:35.947202 | orchestrator | Tuesday 17 March 2026 01:01:27 +0000 (0:00:00.618) 0:09:28.734 *********
2026-03-17 01:02:35.947206 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.947210 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.947214 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.947217 | orchestrator |
2026-03-17 01:02:35.947221 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-17 01:02:35.947225 | orchestrator | Tuesday 17 March 2026 01:01:28 +0000 (0:00:00.519) 0:09:29.254 *********
2026-03-17 01:02:35.947229 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.947233 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.947236 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.947240 | orchestrator |
2026-03-17 01:02:35.947247 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-17 01:02:35.947251 | orchestrator | Tuesday 17 March 2026 01:01:28 +0000 (0:00:00.749) 0:09:30.004 *********
2026-03-17 01:02:35.947254 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.947258 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.947262 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.947266 | orchestrator |
2026-03-17 01:02:35.947270 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-17 01:02:35.947273 | orchestrator | Tuesday 17 March 2026 01:01:29 +0000 (0:00:00.708) 0:09:30.712 *********
2026-03-17 01:02:35.947277 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:02:35.947281 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:02:35.947285 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:02:35.947288 | orchestrator |
2026-03-17 01:02:35.947292 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-17 01:02:35.947296 | orchestrator | Tuesday 17 March 2026 01:01:30 +0000 (0:00:00.699) 0:09:31.412 *********
2026-03-17 01:02:35.947300 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.947306 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.947310 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:02:35.947314 | orchestrator |
2026-03-17 01:02:35.947317 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-17 01:02:35.947321 | orchestrator | Tuesday 17 March 2026 01:01:30 +0000 (0:00:00.613) 0:09:32.025 *********
2026-03-17 01:02:35.947325 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:02:35.947329 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:02:35.947333 | orchestrator | skipping:
[testbed-node-5] 2026-03-17 01:02:35.947336 | orchestrator | 2026-03-17 01:02:35.947340 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-17 01:02:35.947344 | orchestrator | Tuesday 17 March 2026 01:01:31 +0000 (0:00:00.338) 0:09:32.364 ********* 2026-03-17 01:02:35.947348 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.947351 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.947355 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.947359 | orchestrator | 2026-03-17 01:02:35.947363 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-17 01:02:35.947367 | orchestrator | Tuesday 17 March 2026 01:01:31 +0000 (0:00:00.300) 0:09:32.664 ********* 2026-03-17 01:02:35.947370 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.947374 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.947378 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.947382 | orchestrator | 2026-03-17 01:02:35.947386 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-17 01:02:35.947389 | orchestrator | Tuesday 17 March 2026 01:01:32 +0000 (0:00:00.751) 0:09:33.415 ********* 2026-03-17 01:02:35.947393 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.947397 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.947401 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.947405 | orchestrator | 2026-03-17 01:02:35.947409 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-17 01:02:35.947412 | orchestrator | Tuesday 17 March 2026 01:01:33 +0000 (0:00:00.973) 0:09:34.389 ********* 2026-03-17 01:02:35.947416 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.947420 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.947424 | orchestrator | skipping: [testbed-node-5] 2026-03-17 
01:02:35.947427 | orchestrator | 2026-03-17 01:02:35.947431 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-17 01:02:35.947435 | orchestrator | Tuesday 17 March 2026 01:01:33 +0000 (0:00:00.279) 0:09:34.668 ********* 2026-03-17 01:02:35.947439 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.947443 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.947449 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.947453 | orchestrator | 2026-03-17 01:02:35.947456 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-17 01:02:35.947463 | orchestrator | Tuesday 17 March 2026 01:01:33 +0000 (0:00:00.288) 0:09:34.956 ********* 2026-03-17 01:02:35.947466 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.947470 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.947474 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.947478 | orchestrator | 2026-03-17 01:02:35.947482 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-17 01:02:35.947485 | orchestrator | Tuesday 17 March 2026 01:01:34 +0000 (0:00:00.334) 0:09:35.290 ********* 2026-03-17 01:02:35.947489 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.947493 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.947497 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.947501 | orchestrator | 2026-03-17 01:02:35.947504 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-17 01:02:35.947508 | orchestrator | Tuesday 17 March 2026 01:01:34 +0000 (0:00:00.528) 0:09:35.819 ********* 2026-03-17 01:02:35.947512 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.947516 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.947520 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.947524 | orchestrator | 2026-03-17 
01:02:35.947527 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-17 01:02:35.947531 | orchestrator | Tuesday 17 March 2026 01:01:34 +0000 (0:00:00.312) 0:09:36.132 ********* 2026-03-17 01:02:35.947535 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.947539 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.947543 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.947547 | orchestrator | 2026-03-17 01:02:35.947550 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-17 01:02:35.947554 | orchestrator | Tuesday 17 March 2026 01:01:35 +0000 (0:00:00.299) 0:09:36.431 ********* 2026-03-17 01:02:35.947558 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.947562 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.947566 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.947569 | orchestrator | 2026-03-17 01:02:35.947573 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-17 01:02:35.947577 | orchestrator | Tuesday 17 March 2026 01:01:35 +0000 (0:00:00.299) 0:09:36.731 ********* 2026-03-17 01:02:35.947581 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.947585 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.947589 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.947592 | orchestrator | 2026-03-17 01:02:35.947596 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-17 01:02:35.947600 | orchestrator | Tuesday 17 March 2026 01:01:36 +0000 (0:00:00.540) 0:09:37.272 ********* 2026-03-17 01:02:35.947604 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.947607 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.947611 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.947615 | orchestrator | 2026-03-17 01:02:35.947619 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-17 01:02:35.947623 | orchestrator | Tuesday 17 March 2026 01:01:36 +0000 (0:00:00.342) 0:09:37.615 ********* 2026-03-17 01:02:35.947627 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.947630 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.947634 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.947638 | orchestrator | 2026-03-17 01:02:35.947641 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-17 01:02:35.947647 | orchestrator | Tuesday 17 March 2026 01:01:36 +0000 (0:00:00.565) 0:09:38.180 ********* 2026-03-17 01:02:35.947656 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.947667 | orchestrator | 2026-03-17 01:02:35.947673 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-17 01:02:35.947679 | orchestrator | Tuesday 17 March 2026 01:01:37 +0000 (0:00:00.726) 0:09:38.906 ********* 2026-03-17 01:02:35.947685 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.947696 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-17 01:02:35.947703 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 01:02:35.947709 | orchestrator | 2026-03-17 01:02:35.947726 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-17 01:02:35.947730 | orchestrator | Tuesday 17 March 2026 01:01:39 +0000 (0:00:02.058) 0:09:40.965 ********* 2026-03-17 01:02:35.947734 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 01:02:35.947738 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-17 01:02:35.947742 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.947746 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-17 01:02:35.947750 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-17 01:02:35.947753 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.947757 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 01:02:35.947761 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-17 01:02:35.947765 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.947769 | orchestrator | 2026-03-17 01:02:35.947772 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-17 01:02:35.947776 | orchestrator | Tuesday 17 March 2026 01:01:41 +0000 (0:00:01.290) 0:09:42.255 ********* 2026-03-17 01:02:35.947780 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.947784 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.947787 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.947791 | orchestrator | 2026-03-17 01:02:35.947795 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-17 01:02:35.947799 | orchestrator | Tuesday 17 March 2026 01:01:41 +0000 (0:00:00.317) 0:09:42.573 ********* 2026-03-17 01:02:35.947803 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.947806 | orchestrator | 2026-03-17 01:02:35.947810 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-17 01:02:35.947814 | orchestrator | Tuesday 17 March 2026 01:01:42 +0000 (0:00:00.753) 0:09:43.327 ********* 2026-03-17 01:02:35.947821 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-17 01:02:35.947825 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-17 01:02:35.947829 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-17 01:02:35.947833 | orchestrator | 2026-03-17 01:02:35.947836 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-17 01:02:35.947840 | orchestrator | Tuesday 17 March 2026 01:01:42 +0000 (0:00:00.844) 0:09:44.171 ********* 2026-03-17 01:02:35.947844 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.947848 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-17 01:02:35.947852 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.947855 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-17 01:02:35.947859 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.947863 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-17 01:02:35.947867 | orchestrator | 2026-03-17 01:02:35.947870 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-17 01:02:35.947874 | orchestrator | Tuesday 17 March 2026 01:01:47 +0000 (0:00:04.298) 0:09:48.470 ********* 2026-03-17 01:02:35.947881 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.947884 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 01:02:35.947888 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.947892 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 01:02:35.947896 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:02:35.947899 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 01:02:35.947903 | orchestrator | 2026-03-17 01:02:35.947907 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-17 01:02:35.947911 | orchestrator | Tuesday 17 March 2026 01:01:50 +0000 (0:00:02.776) 0:09:51.246 ********* 2026-03-17 01:02:35.947914 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 01:02:35.947918 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.947922 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 01:02:35.947926 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.947930 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 01:02:35.947933 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.947937 | orchestrator | 2026-03-17 01:02:35.947944 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-17 01:02:35.947948 | orchestrator | Tuesday 17 March 2026 01:01:51 +0000 (0:00:01.229) 0:09:52.475 ********* 2026-03-17 01:02:35.947952 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-17 01:02:35.947956 | orchestrator | 2026-03-17 01:02:35.947960 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-17 01:02:35.947964 | orchestrator | Tuesday 17 March 2026 01:01:51 +0000 (0:00:00.218) 0:09:52.694 ********* 2026-03-17 01:02:35.947967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-17 01:02:35.947971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 01:02:35.947975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 01:02:35.947979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 01:02:35.947983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 01:02:35.947987 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.947990 | orchestrator | 2026-03-17 01:02:35.947994 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-17 01:02:35.947998 | orchestrator | Tuesday 17 March 2026 01:01:52 +0000 (0:00:00.546) 0:09:53.240 ********* 2026-03-17 01:02:35.948002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 01:02:35.948006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 01:02:35.948011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 01:02:35.948015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 01:02:35.948019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-17 01:02:35.948023 | orchestrator | skipping: [testbed-node-3] 2026-03-17 
01:02:35.948029 | orchestrator | 2026-03-17 01:02:35.948033 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-17 01:02:35.948037 | orchestrator | Tuesday 17 March 2026 01:01:52 +0000 (0:00:00.566) 0:09:53.807 ********* 2026-03-17 01:02:35.948040 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 01:02:35.948044 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 01:02:35.948048 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 01:02:35.948052 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 01:02:35.948056 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-17 01:02:35.948060 | orchestrator | 2026-03-17 01:02:35.948063 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-17 01:02:35.948067 | orchestrator | Tuesday 17 March 2026 01:02:23 +0000 (0:00:30.646) 0:10:24.454 ********* 2026-03-17 01:02:35.948071 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.948075 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.948079 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.948082 | orchestrator | 2026-03-17 01:02:35.948086 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-17 01:02:35.948090 | orchestrator | 
Tuesday 17 March 2026 01:02:23 +0000 (0:00:00.285) 0:10:24.739 ********* 2026-03-17 01:02:35.948094 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.948098 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.948101 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.948105 | orchestrator | 2026-03-17 01:02:35.948109 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-17 01:02:35.948113 | orchestrator | Tuesday 17 March 2026 01:02:24 +0000 (0:00:00.554) 0:10:25.293 ********* 2026-03-17 01:02:35.948117 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.948120 | orchestrator | 2026-03-17 01:02:35.948124 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-17 01:02:35.948128 | orchestrator | Tuesday 17 March 2026 01:02:24 +0000 (0:00:00.512) 0:10:25.806 ********* 2026-03-17 01:02:35.948134 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.948138 | orchestrator | 2026-03-17 01:02:35.948142 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-17 01:02:35.948145 | orchestrator | Tuesday 17 March 2026 01:02:25 +0000 (0:00:00.680) 0:10:26.487 ********* 2026-03-17 01:02:35.948149 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.948153 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.948157 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.948160 | orchestrator | 2026-03-17 01:02:35.948164 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-17 01:02:35.948168 | orchestrator | Tuesday 17 March 2026 01:02:26 +0000 (0:00:01.395) 0:10:27.883 ********* 2026-03-17 01:02:35.948172 | orchestrator | changed: 
[testbed-node-3] 2026-03-17 01:02:35.948176 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.948179 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.948183 | orchestrator | 2026-03-17 01:02:35.948187 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-17 01:02:35.948191 | orchestrator | Tuesday 17 March 2026 01:02:27 +0000 (0:00:01.133) 0:10:29.017 ********* 2026-03-17 01:02:35.948194 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:02:35.948201 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:02:35.948204 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:02:35.948208 | orchestrator | 2026-03-17 01:02:35.948212 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-17 01:02:35.948216 | orchestrator | Tuesday 17 March 2026 01:02:29 +0000 (0:00:01.897) 0:10:30.914 ********* 2026-03-17 01:02:35.948220 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-17 01:02:35.948223 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-17 01:02:35.948227 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-17 01:02:35.948231 | orchestrator | 2026-03-17 01:02:35.948235 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-17 01:02:35.948238 | orchestrator | Tuesday 17 March 2026 01:02:32 +0000 (0:00:02.600) 0:10:33.514 ********* 2026-03-17 01:02:35.948242 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.948246 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.948252 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.948256 | orchestrator 
| 2026-03-17 01:02:35.948259 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-17 01:02:35.948263 | orchestrator | Tuesday 17 March 2026 01:02:32 +0000 (0:00:00.275) 0:10:33.790 ********* 2026-03-17 01:02:35.948267 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:02:35.948271 | orchestrator | 2026-03-17 01:02:35.948274 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-17 01:02:35.948278 | orchestrator | Tuesday 17 March 2026 01:02:33 +0000 (0:00:00.637) 0:10:34.428 ********* 2026-03-17 01:02:35.948282 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.948286 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.948289 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.948293 | orchestrator | 2026-03-17 01:02:35.948297 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-17 01:02:35.948301 | orchestrator | Tuesday 17 March 2026 01:02:33 +0000 (0:00:00.317) 0:10:34.745 ********* 2026-03-17 01:02:35.948304 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:02:35.948308 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:02:35.948312 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:02:35.948316 | orchestrator | 2026-03-17 01:02:35.948319 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-17 01:02:35.948323 | orchestrator | Tuesday 17 March 2026 01:02:33 +0000 (0:00:00.303) 0:10:35.048 ********* 2026-03-17 01:02:35.948327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:02:35.948331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:02:35.948335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:02:35.948338 | orchestrator 
| skipping: [testbed-node-3] 2026-03-17 01:02:35.948342 | orchestrator | 2026-03-17 01:02:35.948346 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-17 01:02:35.948350 | orchestrator | Tuesday 17 March 2026 01:02:34 +0000 (0:00:00.891) 0:10:35.939 ********* 2026-03-17 01:02:35.948354 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:02:35.948357 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:02:35.948361 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:02:35.948365 | orchestrator | 2026-03-17 01:02:35.948369 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:02:35.948373 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-17 01:02:35.948377 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-17 01:02:35.948383 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-17 01:02:35.948387 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-17 01:02:35.948391 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-17 01:02:35.948397 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-17 01:02:35.948401 | orchestrator | 2026-03-17 01:02:35.948405 | orchestrator | 2026-03-17 01:02:35.948408 | orchestrator | 2026-03-17 01:02:35.948412 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:02:35.948416 | orchestrator | Tuesday 17 March 2026 01:02:35 +0000 (0:00:00.252) 0:10:36.192 ********* 2026-03-17 01:02:35.948420 | orchestrator | =============================================================================== 
2026-03-17 01:02:35.948424 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 49.29s 2026-03-17 01:02:35.948428 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.36s 2026-03-17 01:02:35.948431 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 34.54s 2026-03-17 01:02:35.948435 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.65s 2026-03-17 01:02:35.948439 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.86s 2026-03-17 01:02:35.948443 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.76s 2026-03-17 01:02:35.948447 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.79s 2026-03-17 01:02:35.948450 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.83s 2026-03-17 01:02:35.948454 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.07s 2026-03-17 01:02:35.948458 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.29s 2026-03-17 01:02:35.948462 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.02s 2026-03-17 01:02:35.948465 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.75s 2026-03-17 01:02:35.948469 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.59s 2026-03-17 01:02:35.948473 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.30s 2026-03-17 01:02:35.948477 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.04s 2026-03-17 01:02:35.948480 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.79s 2026-03-17 
01:02:35.948487 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.71s 2026-03-17 01:02:35.948491 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.70s 2026-03-17 01:02:35.948495 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.45s 2026-03-17 01:02:35.948499 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.36s 2026-03-17 01:02:35.948503 | orchestrator | 2026-03-17 01:02:35 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED 2026-03-17 01:02:35.948506 | orchestrator | 2026-03-17 01:02:35 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED 2026-03-17 01:02:35.948510 | orchestrator | 2026-03-17 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:38.980356 | orchestrator | 2026-03-17 01:02:38 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED 2026-03-17 01:02:38.982091 | orchestrator | 2026-03-17 01:02:38 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED 2026-03-17 01:02:38.985586 | orchestrator | 2026-03-17 01:02:38 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED 2026-03-17 01:02:38.986933 | orchestrator | 2026-03-17 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:42.024127 | orchestrator | 2026-03-17 01:02:42 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED 2026-03-17 01:02:42.025530 | orchestrator | 2026-03-17 01:02:42 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED 2026-03-17 01:02:42.027500 | orchestrator | 2026-03-17 01:02:42 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED 2026-03-17 01:02:42.027564 | orchestrator | 2026-03-17 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:02:45.065699 | orchestrator | 2026-03-17 01:02:45 | INFO  | Task 
7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:02:45.069256 | orchestrator | 2026-03-17 01:02:45 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:02:45.069338 | orchestrator | 2026-03-17 01:02:45 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED
2026-03-17 01:02:45.069358 | orchestrator | 2026-03-17 01:02:45 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:48.122845 | orchestrator | 2026-03-17 01:02:48 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:02:48.124760 | orchestrator | 2026-03-17 01:02:48 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:02:48.126543 | orchestrator | 2026-03-17 01:02:48 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED
2026-03-17 01:02:48.126580 | orchestrator | 2026-03-17 01:02:48 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:51.174100 | orchestrator | 2026-03-17 01:02:51 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:02:51.174805 | orchestrator | 2026-03-17 01:02:51 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:02:51.175684 | orchestrator | 2026-03-17 01:02:51 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED
2026-03-17 01:02:51.175723 | orchestrator | 2026-03-17 01:02:51 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:54.219910 | orchestrator | 2026-03-17 01:02:54 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:02:54.220645 | orchestrator | 2026-03-17 01:02:54 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:02:54.221672 | orchestrator | 2026-03-17 01:02:54 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED
2026-03-17 01:02:54.221732 | orchestrator | 2026-03-17 01:02:54 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:02:57.265367 | orchestrator | 2026-03-17 01:02:57 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:02:57.267216 | orchestrator | 2026-03-17 01:02:57 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:02:57.269054 | orchestrator | 2026-03-17 01:02:57 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED
2026-03-17 01:02:57.269117 | orchestrator | 2026-03-17 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:00.302735 | orchestrator | 2026-03-17 01:03:00 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:00.305373 | orchestrator | 2026-03-17 01:03:00 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:00.306191 | orchestrator | 2026-03-17 01:03:00 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED
2026-03-17 01:03:00.306242 | orchestrator | 2026-03-17 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:03.347789 | orchestrator | 2026-03-17 01:03:03 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:03.348631 | orchestrator | 2026-03-17 01:03:03 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:03.352522 | orchestrator | 2026-03-17 01:03:03 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED
2026-03-17 01:03:03.352686 | orchestrator | 2026-03-17 01:03:03 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:06.395818 | orchestrator | 2026-03-17 01:03:06 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:06.397435 | orchestrator | 2026-03-17 01:03:06 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:06.399044 | orchestrator | 2026-03-17 01:03:06 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED
2026-03-17 01:03:06.399123 | orchestrator | 2026-03-17 01:03:06 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:09.437527 | orchestrator | 2026-03-17 01:03:09 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:09.438533 | orchestrator | 2026-03-17 01:03:09 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:09.439438 | orchestrator | 2026-03-17 01:03:09 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state STARTED
2026-03-17 01:03:09.439468 | orchestrator | 2026-03-17 01:03:09 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:12.481503 | orchestrator | 2026-03-17 01:03:12 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:12.483001 | orchestrator | 2026-03-17 01:03:12 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:12.485339 | orchestrator | 2026-03-17 01:03:12 | INFO  | Task 2b3a2686-6f63-41d1-acc5-6688c1c84867 is in state SUCCESS
2026-03-17 01:03:12.486459 | orchestrator |
2026-03-17 01:03:12.486579 | orchestrator |
2026-03-17 01:03:12.486591 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:03:12.486724 | orchestrator |
2026-03-17 01:03:12.486734 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:03:12.486739 | orchestrator | Tuesday 17 March 2026 01:00:44 +0000 (0:00:00.335) 0:00:00.335 *********
2026-03-17 01:03:12.486744 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:03:12.486750 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:03:12.486754 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:03:12.486759 | orchestrator |
2026-03-17 01:03:12.486763 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:03:12.486768 | orchestrator | Tuesday 17 March 2026 01:00:44 +0000 (0:00:00.279) 0:00:00.614
*********
2026-03-17 01:03:12.486773 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-17 01:03:12.486778 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-17 01:03:12.486783 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-17 01:03:12.486787 | orchestrator |
2026-03-17 01:03:12.486792 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-17 01:03:12.486796 | orchestrator |
2026-03-17 01:03:12.486801 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-17 01:03:12.486805 | orchestrator | Tuesday 17 March 2026 01:00:44 +0000 (0:00:00.288) 0:00:00.902 *********
2026-03-17 01:03:12.486810 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:03:12.486828 | orchestrator |
2026-03-17 01:03:12.486833 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-03-17 01:03:12.486841 | orchestrator | Tuesday 17 March 2026 01:00:45 +0000 (0:00:00.613) 0:00:01.515 *********
2026-03-17 01:03:12.486849 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-17 01:03:12.486856 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-17 01:03:12.486864 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-17 01:03:12.486872 | orchestrator |
2026-03-17 01:03:12.486879 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-03-17 01:03:12.486887 | orchestrator | Tuesday 17 March 2026 01:00:46 +0000 (0:00:01.067) 0:00:02.583 *********
2026-03-17 01:03:12.486906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.486918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.486936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.486945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.486964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.486972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external':
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.486980 | orchestrator |
2026-03-17 01:03:12.486987 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-17 01:03:12.486994 | orchestrator | Tuesday 17 March 2026 01:00:48 +0000 (0:00:01.388) 0:00:03.971 *********
2026-03-17 01:03:12.487002 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:03:12.487010 | orchestrator |
2026-03-17 01:03:12.487016 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-03-17 01:03:12.487030 | orchestrator | Tuesday 17 March 2026 01:00:48 +0000 (0:00:00.486) 0:00:04.457 *********
2026-03-17 01:03:12.487038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.487085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.487098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.487106 | orchestrator |
2026-03-17 01:03:12.487117 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-03-17 01:03:12.487125 | orchestrator | Tuesday 17 March 2026 01:00:51 +0000 (0:00:03.024) 0:00:07.482 *********
2026-03-17 01:03:12.487132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.487167 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:03:12.487175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.487195 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:03:12.487208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.487221 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:03:12.487228 | orchestrator |
2026-03-17 01:03:12.487235 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-03-17 01:03:12.487243 | orchestrator | Tuesday 17 March 2026 01:00:52 +0000 (0:00:00.740) 0:00:08.222 *********
2026-03-17 01:03:12.487250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.487279 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:03:12.487286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.487311 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:03:12.487319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-17 01:03:12.487339 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:03:12.487347 | orchestrator |
2026-03-17 01:03:12.487355 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-03-17 01:03:12.487363 | orchestrator | Tuesday 17 March 2026 01:00:53 +0000 (0:00:00.859) 0:00:09.081 *********
2026-03-17 01:03:12.487371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:03:12.487399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option
httpchk']}}}}) 2026-03-17 01:03:12.487411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-17 01:03:12.487421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-17 01:03:12.487439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-17 01:03:12.487447 | orchestrator | 2026-03-17 01:03:12.487454 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-17 01:03:12.487462 | orchestrator | Tuesday 17 March 2026 01:00:55 +0000 (0:00:02.810) 0:00:11.892 ********* 2026-03-17 01:03:12.487470 | orchestrator | changed: [testbed-node-0] 2026-03-17 
01:03:12.487477 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:03:12.487485 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:03:12.487492 | orchestrator | 2026-03-17 01:03:12.487499 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-17 01:03:12.487507 | orchestrator | Tuesday 17 March 2026 01:00:58 +0000 (0:00:02.473) 0:00:14.365 ********* 2026-03-17 01:03:12.487515 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:12.487523 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:03:12.487530 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:03:12.487537 | orchestrator | 2026-03-17 01:03:12.487544 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-03-17 01:03:12.487552 | orchestrator | Tuesday 17 March 2026 01:00:59 +0000 (0:00:01.354) 0:00:15.719 ********* 2026-03-17 01:03:12.487563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:03:12.487573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:03:12.487586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:03:12.487600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-17 01:03:12.487613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-17 01:03:12.487622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-17 01:03:12.487635 | orchestrator | 2026-03-17 01:03:12.487644 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-03-17 01:03:12.487651 | orchestrator | Tuesday 17 March 2026 01:01:01 +0000 (0:00:02.215) 0:00:17.935 ********* 2026-03-17 01:03:12.487659 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:03:12.487666 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:03:12.487673 | orchestrator | } 2026-03-17 01:03:12.487699 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:03:12.487707 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:03:12.487714 | orchestrator | } 2026-03-17 01:03:12.487721 | orchestrator | 
changed: [testbed-node-2] => { 2026-03-17 01:03:12.487728 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:03:12.487735 | orchestrator | } 2026-03-17 01:03:12.487742 | orchestrator | 2026-03-17 01:03:12.487749 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:03:12.487761 | orchestrator | Tuesday 17 March 2026 01:01:02 +0000 (0:00:00.535) 0:00:18.471 ********* 2026-03-17 01:03:12.487770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:03:12.487779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-17 01:03:12.487787 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:12.487798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:03:12.487816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-17 01:03:12.487824 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:12.487832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:03:12.487842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-17 01:03:12.487853 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:12.487861 | orchestrator | 2026-03-17 01:03:12.487868 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 01:03:12.487875 | orchestrator | Tuesday 17 March 2026 01:01:03 +0000 (0:00:00.876) 0:00:19.347 ********* 2026-03-17 01:03:12.487882 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:12.487889 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:12.487896 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:12.487903 | orchestrator | 2026-03-17 01:03:12.487910 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-17 01:03:12.487917 | orchestrator | Tuesday 17 March 2026 01:01:03 +0000 (0:00:00.268) 0:00:19.615 ********* 2026-03-17 01:03:12.487924 | orchestrator | 2026-03-17 01:03:12.487931 | orchestrator | TASK [opensearch : Flush handlers] 
********************************************* 2026-03-17 01:03:12.487939 | orchestrator | Tuesday 17 March 2026 01:01:03 +0000 (0:00:00.061) 0:00:19.677 ********* 2026-03-17 01:03:12.487946 | orchestrator | 2026-03-17 01:03:12.487954 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-17 01:03:12.487961 | orchestrator | Tuesday 17 March 2026 01:01:03 +0000 (0:00:00.064) 0:00:19.741 ********* 2026-03-17 01:03:12.487969 | orchestrator | 2026-03-17 01:03:12.487975 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-17 01:03:12.487983 | orchestrator | Tuesday 17 March 2026 01:01:03 +0000 (0:00:00.217) 0:00:19.958 ********* 2026-03-17 01:03:12.487990 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:12.487998 | orchestrator | 2026-03-17 01:03:12.488005 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-17 01:03:12.488013 | orchestrator | Tuesday 17 March 2026 01:01:04 +0000 (0:00:00.181) 0:00:20.140 ********* 2026-03-17 01:03:12.488021 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:12.488029 | orchestrator | 2026-03-17 01:03:12.488036 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-17 01:03:12.488044 | orchestrator | Tuesday 17 March 2026 01:01:04 +0000 (0:00:00.180) 0:00:20.321 ********* 2026-03-17 01:03:12.488052 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:12.488059 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:03:12.488067 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:03:12.488075 | orchestrator | 2026-03-17 01:03:12.488083 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-17 01:03:12.488090 | orchestrator | Tuesday 17 March 2026 01:01:51 +0000 (0:00:47.339) 0:01:07.661 ********* 2026-03-17 01:03:12.488099 | orchestrator | changed: 
[testbed-node-0] 2026-03-17 01:03:12.488107 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:03:12.488114 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:03:12.488122 | orchestrator | 2026-03-17 01:03:12.488127 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-17 01:03:12.488132 | orchestrator | Tuesday 17 March 2026 01:02:55 +0000 (0:01:04.032) 0:02:11.694 ********* 2026-03-17 01:03:12.488142 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:03:12.488148 | orchestrator | 2026-03-17 01:03:12.488152 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-17 01:03:12.488157 | orchestrator | Tuesday 17 March 2026 01:02:56 +0000 (0:00:00.565) 0:02:12.259 ********* 2026-03-17 01:03:12.488162 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:12.488167 | orchestrator | 2026-03-17 01:03:12.488172 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-17 01:03:12.488177 | orchestrator | Tuesday 17 March 2026 01:02:59 +0000 (0:00:02.742) 0:02:15.002 ********* 2026-03-17 01:03:12.488181 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:12.488186 | orchestrator | 2026-03-17 01:03:12.488191 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-17 01:03:12.488195 | orchestrator | Tuesday 17 March 2026 01:03:01 +0000 (0:00:02.265) 0:02:17.268 ********* 2026-03-17 01:03:12.488205 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:12.488210 | orchestrator | 2026-03-17 01:03:12.488215 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-17 01:03:12.488219 | orchestrator | Tuesday 17 March 2026 01:03:03 +0000 (0:00:02.394) 0:02:19.662 ********* 2026-03-17 01:03:12.488224 | orchestrator | changed: 
[testbed-node-0] 2026-03-17 01:03:12.488229 | orchestrator | 2026-03-17 01:03:12.488233 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-17 01:03:12.488238 | orchestrator | Tuesday 17 March 2026 01:03:06 +0000 (0:00:03.143) 0:02:22.806 ********* 2026-03-17 01:03:12.488243 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:12.488247 | orchestrator | 2026-03-17 01:03:12.488252 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:03:12.488257 | orchestrator | testbed-node-0 : ok=20  changed=12  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:03:12.488263 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 01:03:12.488267 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-17 01:03:12.488272 | orchestrator | 2026-03-17 01:03:12.488277 | orchestrator | 2026-03-17 01:03:12.488281 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:03:12.488286 | orchestrator | Tuesday 17 March 2026 01:03:09 +0000 (0:00:02.400) 0:02:25.206 ********* 2026-03-17 01:03:12.488291 | orchestrator | =============================================================================== 2026-03-17 01:03:12.488295 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 64.03s 2026-03-17 01:03:12.488300 | orchestrator | opensearch : Restart opensearch container ------------------------------ 47.34s 2026-03-17 01:03:12.488305 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.14s 2026-03-17 01:03:12.488315 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.02s 2026-03-17 01:03:12.488320 | orchestrator | opensearch : Copying over config.json files for services 
---------------- 2.81s 2026-03-17 01:03:12.488325 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.74s 2026-03-17 01:03:12.488329 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.47s 2026-03-17 01:03:12.488334 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.40s 2026-03-17 01:03:12.488348 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.39s 2026-03-17 01:03:12.488353 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.27s 2026-03-17 01:03:12.488363 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.22s 2026-03-17 01:03:12.488368 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.39s 2026-03-17 01:03:12.488373 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.35s 2026-03-17 01:03:12.488378 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.07s 2026-03-17 01:03:12.488383 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.88s 2026-03-17 01:03:12.488387 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.86s 2026-03-17 01:03:12.488392 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.74s 2026-03-17 01:03:12.488397 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.61s 2026-03-17 01:03:12.488401 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-03-17 01:03:12.488406 | orchestrator | service-check-containers : opensearch | Notify handlers to restart containers --- 0.54s 2026-03-17 01:03:12.488411 | orchestrator | 2026-03-17 01:03:12 | INFO  | Wait 1 second(s) until 
the next check
2026-03-17 01:03:15.525932 | orchestrator | 2026-03-17 01:03:15 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:15.527580 | orchestrator | 2026-03-17 01:03:15 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:15.527627 | orchestrator | 2026-03-17 01:03:15 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:18.561211 | orchestrator | 2026-03-17 01:03:18 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:18.562726 | orchestrator | 2026-03-17 01:03:18 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:18.562813 | orchestrator | 2026-03-17 01:03:18 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:21.604215 | orchestrator | 2026-03-17 01:03:21 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:21.605958 | orchestrator | 2026-03-17 01:03:21 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:21.606306 | orchestrator | 2026-03-17 01:03:21 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:24.654201 | orchestrator | 2026-03-17 01:03:24 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:24.656003 | orchestrator | 2026-03-17 01:03:24 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:24.656408 | orchestrator | 2026-03-17 01:03:24 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:27.702972 | orchestrator | 2026-03-17 01:03:27 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:27.704936 | orchestrator | 2026-03-17 01:03:27 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:27.704976 | orchestrator | 2026-03-17 01:03:27 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:30.751248 | orchestrator | 2026-03-17 01:03:30 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:30.752533 | orchestrator | 2026-03-17 01:03:30 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:30.752627 | orchestrator | 2026-03-17 01:03:30 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:33.792909 | orchestrator | 2026-03-17 01:03:33 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:33.794216 | orchestrator | 2026-03-17 01:03:33 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:33.794289 | orchestrator | 2026-03-17 01:03:33 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:36.841008 | orchestrator | 2026-03-17 01:03:36 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:36.842752 | orchestrator | 2026-03-17 01:03:36 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:36.843028 | orchestrator | 2026-03-17 01:03:36 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:39.882341 | orchestrator | 2026-03-17 01:03:39 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:39.884047 | orchestrator | 2026-03-17 01:03:39 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:39.884255 | orchestrator | 2026-03-17 01:03:39 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:42.927787 | orchestrator | 2026-03-17 01:03:42 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:42.929972 | orchestrator | 2026-03-17 01:03:42 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:42.930081 | orchestrator | 2026-03-17 01:03:42 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:45.966929 | orchestrator | 2026-03-17 01:03:45 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
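The repeated "is in state STARTED … Wait 1 second(s) until the next check" records above are produced by a client polling remote task state in a fixed-interval loop. A minimal sketch of such a poll-until-done loop (the `get_state` callback, the state names, and the function name are illustrative assumptions, not the actual OSISM client API):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1, timeout=300):
    """Poll each task's state until none is still in progress.

    get_state: hypothetical callback mapping a task id to a state string.
    Logs one line per task per round, then waits `interval` seconds,
    mirroring the log pattern above. Raises TimeoutError on deadline.
    """
    deadline = time.monotonic() + timeout
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        # Done once no task reports an in-progress state (assumed names).
        if all(s not in ("PENDING", "STARTED") for s in states.values()):
            return states
        if time.monotonic() >= deadline:
            raise TimeoutError(f"tasks still running: {states}")
        print(f"Wait {interval} second(s) until the next check")
        time.sleep(interval)
```

Note that new task ids can appear between rounds in the real log (a third task shows up once an earlier one finishes); this sketch polls a fixed set for simplicity.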
2026-03-17 01:03:45.968181 | orchestrator | 2026-03-17 01:03:45 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:45.968249 | orchestrator | 2026-03-17 01:03:45 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:49.001072 | orchestrator | 2026-03-17 01:03:49 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:49.002942 | orchestrator | 2026-03-17 01:03:49 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:49.002999 | orchestrator | 2026-03-17 01:03:49 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:52.042964 | orchestrator | 2026-03-17 01:03:52 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:52.044944 | orchestrator | 2026-03-17 01:03:52 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:52.045054 | orchestrator | 2026-03-17 01:03:52 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:55.091933 | orchestrator | 2026-03-17 01:03:55 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:55.095613 | orchestrator | 2026-03-17 01:03:55 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state STARTED
2026-03-17 01:03:55.095848 | orchestrator | 2026-03-17 01:03:55 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:03:58.142597 | orchestrator | 2026-03-17 01:03:58 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED
2026-03-17 01:03:58.144228 | orchestrator | 2026-03-17 01:03:58 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:03:58.151451 | orchestrator |
2026-03-17 01:03:58.151510 | orchestrator |
2026-03-17 01:03:58.151518 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-17 01:03:58.151525 | orchestrator |
2026-03-17 01:03:58.151532 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-17 01:03:58.151539 | orchestrator | Tuesday 17 March 2026 01:00:44 +0000 (0:00:00.107) 0:00:00.107 *********
2026-03-17 01:03:58.151545 | orchestrator | ok: [localhost] => {
2026-03-17 01:03:58.151553 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-17 01:03:58.151560 | orchestrator | }
2026-03-17 01:03:58.151567 | orchestrator |
2026-03-17 01:03:58.151574 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-17 01:03:58.151579 | orchestrator | Tuesday 17 March 2026 01:00:44 +0000 (0:00:00.052) 0:00:00.160 *********
2026-03-17 01:03:58.151583 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-17 01:03:58.151729 | orchestrator | ...ignoring
2026-03-17 01:03:58.151741 | orchestrator |
2026-03-17 01:03:58.151748 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-17 01:03:58.151755 | orchestrator | Tuesday 17 March 2026 01:00:47 +0000 (0:00:02.985) 0:00:03.145 *********
2026-03-17 01:03:58.151761 | orchestrator | skipping: [localhost]
2026-03-17 01:03:58.151767 | orchestrator |
2026-03-17 01:03:58.151774 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-17 01:03:58.151781 | orchestrator | Tuesday 17 March 2026 01:00:47 +0000 (0:00:00.100) 0:00:03.248 *********
2026-03-17 01:03:58.151788 | orchestrator | ok: [localhost]
2026-03-17 01:03:58.151795 | orchestrator |
2026-03-17 01:03:58.151973 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:03:58.151990 | orchestrator |
2026-03-17 01:03:58.151994 | orchestrator | TASK [Group hosts based on Kolla action]
***************************************
2026-03-17 01:03:58.151998 | orchestrator | Tuesday 17 March 2026 01:00:47 +0000 (0:00:00.248) 0:00:03.497 *********
2026-03-17 01:03:58.152002 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:03:58.152006 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:03:58.152010 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:03:58.152013 | orchestrator |
2026-03-17 01:03:58.152017 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:03:58.152021 | orchestrator | Tuesday 17 March 2026 01:00:47 +0000 (0:00:00.293) 0:00:03.791 *********
2026-03-17 01:03:58.152025 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-17 01:03:58.152030 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-17 01:03:58.152040 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-17 01:03:58.152044 | orchestrator |
2026-03-17 01:03:58.152047 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-17 01:03:58.152051 | orchestrator |
2026-03-17 01:03:58.152055 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-17 01:03:58.152059 | orchestrator | Tuesday 17 March 2026 01:00:48 +0000 (0:00:00.401) 0:00:04.192 *********
2026-03-17 01:03:58.152063 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-17 01:03:58.152067 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-17 01:03:58.152071 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-17 01:03:58.152075 | orchestrator |
2026-03-17 01:03:58.152078 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-17 01:03:58.152082 | orchestrator | Tuesday 17 March 2026 01:00:48 +0000 (0:00:00.373) 0:00:04.566 *********
2026-03-17 01:03:58.152086 | orchestrator | included:
/ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:03:58.152091 | orchestrator | 2026-03-17 01:03:58.152095 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-17 01:03:58.152098 | orchestrator | Tuesday 17 March 2026 01:00:49 +0000 (0:00:00.783) 0:00:05.349 ********* 2026-03-17 01:03:58.152123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 01:03:58.152135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 01:03:58.152140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 01:03:58.152145 | orchestrator | 2026-03-17 01:03:58.152158 | 
orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-17 01:03:58.152162 | orchestrator | Tuesday 17 March 2026 01:00:52 +0000 (0:00:03.320) 0:00:08.669 ********* 2026-03-17 01:03:58.152166 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152170 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152177 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.152181 | orchestrator | 2026-03-17 01:03:58.152185 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-17 01:03:58.152189 | orchestrator | Tuesday 17 March 2026 01:00:53 +0000 (0:00:00.603) 0:00:09.273 ********* 2026-03-17 01:03:58.152192 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152196 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152200 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.152204 | orchestrator | 2026-03-17 01:03:58.152207 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-17 01:03:58.152211 | orchestrator | Tuesday 17 March 2026 01:00:54 +0000 (0:00:01.690) 0:00:10.964 ********* 2026-03-17 01:03:58.152218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 01:03:58.152225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 01:03:58.152244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 01:03:58.152252 | orchestrator | 2026-03-17 01:03:58.152258 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-17 01:03:58.152265 | orchestrator | Tuesday 17 March 2026 01:00:58 +0000 (0:00:03.321) 0:00:14.285 ********* 2026-03-17 01:03:58.152271 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152276 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152282 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.152288 | orchestrator | 2026-03-17 01:03:58.152294 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-17 01:03:58.152300 | orchestrator | Tuesday 17 March 2026 01:00:59 +0000 (0:00:00.960) 0:00:15.246 ********* 2026-03-17 01:03:58.152307 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.152313 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:03:58.152320 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:03:58.152326 | orchestrator | 2026-03-17 01:03:58.152332 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 01:03:58.152339 | orchestrator | Tuesday 17 March 2026 01:01:02 +0000 
(0:00:03.774) 0:00:19.020 ********* 2026-03-17 01:03:58.152345 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:03:58.152351 | orchestrator | 2026-03-17 01:03:58.152358 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-17 01:03:58.152363 | orchestrator | Tuesday 17 March 2026 01:01:03 +0000 (0:00:00.638) 0:00:19.659 ********* 2026-03-17 01:03:58.152377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152395 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152409 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152423 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152427 | orchestrator | 2026-03-17 01:03:58.152431 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-17 01:03:58.152435 | orchestrator | Tuesday 17 March 2026 01:01:06 +0000 (0:00:02.987) 0:00:22.646 ********* 2026-03-17 01:03:58.152441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152445 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152459 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152469 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152473 | orchestrator | 2026-03-17 01:03:58.152477 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-17 01:03:58.152481 | orchestrator | Tuesday 17 March 2026 01:01:08 +0000 (0:00:02.302) 0:00:24.948 ********* 2026-03-17 01:03:58.152485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152492 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152504 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152519 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152525 | orchestrator | 2026-03-17 01:03:58.152530 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-03-17 01:03:58.152537 | orchestrator | Tuesday 17 March 2026 01:01:11 +0000 (0:00:03.119) 0:00:28.068 ********* 2026-03-17 01:03:58.152551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 01:03:58.152561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 01:03:58.152577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-17 01:03:58.152583 | orchestrator | 2026-03-17 01:03:58.152590 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-03-17 01:03:58.152596 | orchestrator | Tuesday 17 March 2026 01:01:15 +0000 (0:00:03.706) 0:00:31.774 ********* 2026-03-17 01:03:58.152602 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:03:58.152608 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:03:58.152615 | orchestrator | } 2026-03-17 01:03:58.152620 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:03:58.152626 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:03:58.152633 | orchestrator | } 2026-03-17 01:03:58.152653 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:03:58.152660 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:03:58.152667 | orchestrator | } 2026-03-17 01:03:58.152673 | orchestrator | 2026-03-17 01:03:58.152680 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:03:58.152687 | orchestrator | Tuesday 17 March 2026 01:01:15 +0000 (0:00:00.267) 0:00:32.042 ********* 2026-03-17 01:03:58.152698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152710 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152730 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.152751 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152758 | orchestrator | 2026-03-17 01:03:58.152765 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-03-17 01:03:58.152771 | orchestrator | Tuesday 17 March 2026 01:01:18 +0000 (0:00:02.789) 0:00:34.831 ********* 2026-03-17 01:03:58.152778 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152785 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152792 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152799 | orchestrator | 2026-03-17 01:03:58.152806 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-03-17 01:03:58.152810 | orchestrator | Tuesday 17 March 2026 01:01:19 +0000 (0:00:00.407) 0:00:35.239 ********* 2026-03-17 01:03:58.152813 
| orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152817 | orchestrator | 2026-03-17 01:03:58.152821 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-03-17 01:03:58.152825 | orchestrator | Tuesday 17 March 2026 01:01:19 +0000 (0:00:00.095) 0:00:35.334 ********* 2026-03-17 01:03:58.152829 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152832 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152836 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152840 | orchestrator | 2026-03-17 01:03:58.152844 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-03-17 01:03:58.152847 | orchestrator | Tuesday 17 March 2026 01:01:19 +0000 (0:00:00.296) 0:00:35.630 ********* 2026-03-17 01:03:58.152854 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152858 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152862 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152866 | orchestrator | 2026-03-17 01:03:58.152869 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-03-17 01:03:58.152873 | orchestrator | Tuesday 17 March 2026 01:01:19 +0000 (0:00:00.315) 0:00:35.946 ********* 2026-03-17 01:03:58.152877 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152881 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152885 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152888 | orchestrator | 2026-03-17 01:03:58.152892 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-03-17 01:03:58.152896 | orchestrator | Tuesday 17 March 2026 01:01:20 +0000 (0:00:00.327) 0:00:36.273 ********* 2026-03-17 01:03:58.152900 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152904 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152907 | 
orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152911 | orchestrator | 2026-03-17 01:03:58.152915 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-03-17 01:03:58.152919 | orchestrator | Tuesday 17 March 2026 01:01:20 +0000 (0:00:00.546) 0:00:36.819 ********* 2026-03-17 01:03:58.152928 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152932 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152935 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152939 | orchestrator | 2026-03-17 01:03:58.152943 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-03-17 01:03:58.152947 | orchestrator | Tuesday 17 March 2026 01:01:21 +0000 (0:00:00.290) 0:00:37.110 ********* 2026-03-17 01:03:58.152950 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.152954 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.152958 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.152962 | orchestrator | 2026-03-17 01:03:58.152966 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-03-17 01:03:58.152970 | orchestrator | Tuesday 17 March 2026 01:01:21 +0000 (0:00:00.282) 0:00:37.393 ********* 2026-03-17 01:03:58.152973 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-17 01:03:58.152977 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-17 01:03:58.152981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-17 01:03:58.152985 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-17 01:03:58.152988 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-17 01:03:58.152992 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-17 01:03:58.152996 | orchestrator | skipping: [testbed-node-0] 2026-03-17 
01:03:58.153002 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153006 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-17 01:03:58.153010 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-17 01:03:58.153014 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-17 01:03:58.153018 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153021 | orchestrator | 2026-03-17 01:03:58.153025 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-03-17 01:03:58.153029 | orchestrator | Tuesday 17 March 2026 01:01:21 +0000 (0:00:00.335) 0:00:37.728 ********* 2026-03-17 01:03:58.153033 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153037 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153040 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153044 | orchestrator | 2026-03-17 01:03:58.153048 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-03-17 01:03:58.153052 | orchestrator | Tuesday 17 March 2026 01:01:22 +0000 (0:00:00.476) 0:00:38.205 ********* 2026-03-17 01:03:58.153056 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153059 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153063 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153067 | orchestrator | 2026-03-17 01:03:58.153071 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-03-17 01:03:58.153074 | orchestrator | Tuesday 17 March 2026 01:01:22 +0000 (0:00:00.286) 0:00:38.491 ********* 2026-03-17 01:03:58.153078 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153082 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153086 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153089 | orchestrator | 2026-03-17 01:03:58.153093 | 
orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-03-17 01:03:58.153097 | orchestrator | Tuesday 17 March 2026 01:01:22 +0000 (0:00:00.283) 0:00:38.774 ********* 2026-03-17 01:03:58.153101 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153105 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153108 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153112 | orchestrator | 2026-03-17 01:03:58.153116 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-03-17 01:03:58.153120 | orchestrator | Tuesday 17 March 2026 01:01:22 +0000 (0:00:00.280) 0:00:39.055 ********* 2026-03-17 01:03:58.153124 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153128 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153134 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153138 | orchestrator | 2026-03-17 01:03:58.153141 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-03-17 01:03:58.153145 | orchestrator | Tuesday 17 March 2026 01:01:23 +0000 (0:00:00.449) 0:00:39.505 ********* 2026-03-17 01:03:58.153149 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153153 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153156 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153160 | orchestrator | 2026-03-17 01:03:58.153164 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-03-17 01:03:58.153168 | orchestrator | Tuesday 17 March 2026 01:01:23 +0000 (0:00:00.278) 0:00:39.784 ********* 2026-03-17 01:03:58.153171 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153175 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153179 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153183 | orchestrator | 2026-03-17 01:03:58.153187 | 
orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-03-17 01:03:58.153193 | orchestrator | Tuesday 17 March 2026 01:01:23 +0000 (0:00:00.288) 0:00:40.073 ********* 2026-03-17 01:03:58.153197 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153201 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153204 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153208 | orchestrator | 2026-03-17 01:03:58.153212 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-03-17 01:03:58.153216 | orchestrator | Tuesday 17 March 2026 01:01:24 +0000 (0:00:00.290) 0:00:40.364 ********* 2026-03-17 01:03:58.153222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.153227 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.153238 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.153251 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153255 | orchestrator | 2026-03-17 01:03:58.153259 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-03-17 01:03:58.153266 | orchestrator | Tuesday 17 March 2026 01:01:26 +0000 (0:00:02.127) 0:00:42.491 ********* 2026-03-17 01:03:58.153271 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153277 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153282 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153287 | orchestrator | 2026-03-17 01:03:58.153293 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-03-17 01:03:58.153300 | orchestrator | Tuesday 17 March 2026 01:01:26 +0000 (0:00:00.308) 0:00:42.799 ********* 2026-03-17 01:03:58.153306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.153320 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.153339 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-17 01:03:58.153350 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153354 | orchestrator | 2026-03-17 01:03:58.153358 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-03-17 01:03:58.153362 | orchestrator | Tuesday 17 March 2026 01:01:28 +0000 (0:00:02.147) 0:00:44.947 ********* 2026-03-17 01:03:58.153365 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153369 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153373 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153377 | orchestrator | 2026-03-17 01:03:58.153381 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-17 01:03:58.153387 | orchestrator | Tuesday 17 March 2026 01:01:29 +0000 (0:00:00.299) 0:00:45.246 ********* 2026-03-17 01:03:58.153391 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153395 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153455 
| orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153460 | orchestrator | 2026-03-17 01:03:58.153464 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-17 01:03:58.153469 | orchestrator | Tuesday 17 March 2026 01:01:29 +0000 (0:00:00.473) 0:00:45.720 ********* 2026-03-17 01:03:58.153472 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153476 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153480 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153486 | orchestrator | 2026-03-17 01:03:58.153492 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-17 01:03:58.153498 | orchestrator | Tuesday 17 March 2026 01:01:29 +0000 (0:00:00.291) 0:00:46.012 ********* 2026-03-17 01:03:58.153504 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153510 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153515 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153521 | orchestrator | 2026-03-17 01:03:58.153527 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-17 01:03:58.153533 | orchestrator | Tuesday 17 March 2026 01:01:30 +0000 (0:00:00.502) 0:00:46.515 ********* 2026-03-17 01:03:58.153540 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153546 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153553 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153559 | orchestrator | 2026-03-17 01:03:58.153566 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-17 01:03:58.153570 | orchestrator | Tuesday 17 March 2026 01:01:30 +0000 (0:00:00.483) 0:00:46.998 ********* 2026-03-17 01:03:58.153579 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.153583 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:03:58.153587 | 
orchestrator | changed: [testbed-node-2] 2026-03-17 01:03:58.153591 | orchestrator | 2026-03-17 01:03:58.153595 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-17 01:03:58.153599 | orchestrator | Tuesday 17 March 2026 01:01:32 +0000 (0:00:01.107) 0:00:48.106 ********* 2026-03-17 01:03:58.153602 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.153606 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:03:58.153610 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:03:58.153615 | orchestrator | 2026-03-17 01:03:58.153621 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-17 01:03:58.153627 | orchestrator | Tuesday 17 March 2026 01:01:32 +0000 (0:00:00.322) 0:00:48.429 ********* 2026-03-17 01:03:58.153647 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.153654 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:03:58.153664 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:03:58.153670 | orchestrator | 2026-03-17 01:03:58.153677 | orchestrator | 2026-03-17 01:03:58 | INFO  | Task 2bfa0ed8-70fd-4645-8735-7d1bd6199954 is in state SUCCESS 2026-03-17 01:03:58.153682 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-17 01:03:58.153686 | orchestrator | Tuesday 17 March 2026 01:01:32 +0000 (0:00:00.303) 0:00:48.733 ********* 2026-03-17 01:03:58.153691 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-17 01:03:58.153695 | orchestrator | ...ignoring 2026-03-17 01:03:58.153699 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-17 01:03:58.153703 | orchestrator | ...ignoring 2026-03-17 01:03:58.153706 | orchestrator | fatal: [testbed-node-2]: FAILED!
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-17 01:03:58.153710 | orchestrator | ...ignoring 2026-03-17 01:03:58.153714 | orchestrator | 2026-03-17 01:03:58.153718 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-17 01:03:58.153722 | orchestrator | Tuesday 17 March 2026 01:01:43 +0000 (0:00:10.799) 0:00:59.532 ********* 2026-03-17 01:03:58.153726 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.153732 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:03:58.153739 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:03:58.153745 | orchestrator | 2026-03-17 01:03:58.153751 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-17 01:03:58.153757 | orchestrator | Tuesday 17 March 2026 01:01:43 +0000 (0:00:00.481) 0:01:00.014 ********* 2026-03-17 01:03:58.153763 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153769 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153776 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153783 | orchestrator | 2026-03-17 01:03:58.153790 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-17 01:03:58.153797 | orchestrator | Tuesday 17 March 2026 01:01:44 +0000 (0:00:00.286) 0:01:00.300 ********* 2026-03-17 01:03:58.153803 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153809 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153813 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153817 | orchestrator | 2026-03-17 01:03:58.153821 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-17 01:03:58.153825 | orchestrator | Tuesday 17 March 2026 01:01:44 +0000 (0:00:00.287) 0:01:00.588 ********* 2026-03-17 01:03:58.153829 | orchestrator | skipping: 
[testbed-node-0] 2026-03-17 01:03:58.153832 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153836 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153840 | orchestrator | 2026-03-17 01:03:58.153847 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-17 01:03:58.153855 | orchestrator | Tuesday 17 March 2026 01:01:44 +0000 (0:00:00.300) 0:01:00.889 ********* 2026-03-17 01:03:58.153859 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.153863 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:03:58.153867 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:03:58.153870 | orchestrator | 2026-03-17 01:03:58.153874 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-17 01:03:58.153882 | orchestrator | Tuesday 17 March 2026 01:01:45 +0000 (0:00:00.465) 0:01:01.355 ********* 2026-03-17 01:03:58.153886 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.153890 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153894 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153898 | orchestrator | 2026-03-17 01:03:58.153902 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 01:03:58.153906 | orchestrator | Tuesday 17 March 2026 01:01:45 +0000 (0:00:00.297) 0:01:01.653 ********* 2026-03-17 01:03:58.153909 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.153913 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.153917 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-17 01:03:58.153987 | orchestrator | 2026-03-17 01:03:58.153991 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-17 01:03:58.153995 | orchestrator | Tuesday 17 March 2026 01:01:45 +0000 (0:00:00.369) 0:01:02.023 ********* 2026-03-17 
01:03:58.153999 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.154003 | orchestrator | 2026-03-17 01:03:58.154007 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-17 01:03:58.154011 | orchestrator | Tuesday 17 March 2026 01:01:56 +0000 (0:00:10.732) 0:01:12.755 ********* 2026-03-17 01:03:58.154044 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.154048 | orchestrator | 2026-03-17 01:03:58.154051 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-17 01:03:58.154055 | orchestrator | Tuesday 17 March 2026 01:01:56 +0000 (0:00:00.132) 0:01:12.888 ********* 2026-03-17 01:03:58.154059 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.154063 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.154067 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.154070 | orchestrator | 2026-03-17 01:03:58.154074 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-17 01:03:58.154078 | orchestrator | Tuesday 17 March 2026 01:01:57 +0000 (0:00:00.940) 0:01:13.829 ********* 2026-03-17 01:03:58.154082 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.154086 | orchestrator | 2026-03-17 01:03:58.154090 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-17 01:03:58.154093 | orchestrator | Tuesday 17 March 2026 01:02:05 +0000 (0:00:07.408) 0:01:21.237 ********* 2026-03-17 01:03:58.154097 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.154101 | orchestrator | 2026-03-17 01:03:58.154105 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-17 01:03:58.154109 | orchestrator | Tuesday 17 March 2026 01:02:06 +0000 (0:00:01.577) 0:01:22.815 ********* 2026-03-17 01:03:58.154112 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.154116 | 
orchestrator | 2026-03-17 01:03:58.154120 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-17 01:03:58.154124 | orchestrator | Tuesday 17 March 2026 01:02:09 +0000 (0:00:02.462) 0:01:25.277 ********* 2026-03-17 01:03:58.154128 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.154132 | orchestrator | 2026-03-17 01:03:58.154136 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-17 01:03:58.154140 | orchestrator | Tuesday 17 March 2026 01:02:09 +0000 (0:00:00.110) 0:01:25.388 ********* 2026-03-17 01:03:58.154143 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.154147 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.154151 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.154157 | orchestrator | 2026-03-17 01:03:58.154174 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-17 01:03:58.154182 | orchestrator | Tuesday 17 March 2026 01:02:09 +0000 (0:00:00.386) 0:01:25.774 ********* 2026-03-17 01:03:58.154188 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.154194 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:03:58.154200 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:03:58.154205 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-17 01:03:58.154211 | orchestrator | 2026-03-17 01:03:58.154217 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-17 01:03:58.154223 | orchestrator | skipping: no hosts matched 2026-03-17 01:03:58.154229 | orchestrator | 2026-03-17 01:03:58.154235 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-17 01:03:58.154241 | orchestrator | 2026-03-17 01:03:58.154247 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-17 01:03:58.154254 | orchestrator | Tuesday 17 March 2026 01:02:09 +0000 (0:00:00.299) 0:01:26.074 ********* 2026-03-17 01:03:58.154260 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:03:58.154266 | orchestrator | 2026-03-17 01:03:58.154272 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-17 01:03:58.154278 | orchestrator | Tuesday 17 March 2026 01:02:32 +0000 (0:00:22.079) 0:01:48.153 ********* 2026-03-17 01:03:58.154285 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:03:58.154291 | orchestrator | 2026-03-17 01:03:58.154297 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-17 01:03:58.154304 | orchestrator | Tuesday 17 March 2026 01:02:42 +0000 (0:00:10.559) 0:01:58.713 ********* 2026-03-17 01:03:58.154310 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:03:58.154317 | orchestrator | 2026-03-17 01:03:58.154321 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-17 01:03:58.154325 | orchestrator | 2026-03-17 01:03:58.154329 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-17 01:03:58.154333 | orchestrator | Tuesday 17 March 2026 01:02:45 +0000 (0:00:02.508) 0:02:01.221 ********* 2026-03-17 01:03:58.154361 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:03:58.154371 | orchestrator | 2026-03-17 01:03:58.154377 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-17 01:03:58.154384 | orchestrator | Tuesday 17 March 2026 01:03:00 +0000 (0:00:15.547) 0:02:16.769 ********* 2026-03-17 01:03:58.154389 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:03:58.154395 | orchestrator | 2026-03-17 01:03:58.154400 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-17 01:03:58.154406 
| orchestrator | Tuesday 17 March 2026 01:03:16 +0000 (0:00:15.572) 0:02:32.341 ********* 2026-03-17 01:03:58.154411 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:03:58.154417 | orchestrator | 2026-03-17 01:03:58.154431 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-17 01:03:58.154444 | orchestrator | 2026-03-17 01:03:58.154450 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-17 01:03:58.154455 | orchestrator | Tuesday 17 March 2026 01:03:18 +0000 (0:00:02.236) 0:02:34.578 ********* 2026-03-17 01:03:58.154461 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.154466 | orchestrator | 2026-03-17 01:03:58.154473 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-17 01:03:58.154478 | orchestrator | Tuesday 17 March 2026 01:03:29 +0000 (0:00:10.541) 0:02:45.120 ********* 2026-03-17 01:03:58.154484 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.154490 | orchestrator | 2026-03-17 01:03:58.154496 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-17 01:03:58.154501 | orchestrator | Tuesday 17 March 2026 01:03:33 +0000 (0:00:04.584) 0:02:49.705 ********* 2026-03-17 01:03:58.154507 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.154513 | orchestrator | 2026-03-17 01:03:58.154519 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-17 01:03:58.154529 | orchestrator | 2026-03-17 01:03:58.154535 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-17 01:03:58.154541 | orchestrator | Tuesday 17 March 2026 01:03:36 +0000 (0:00:02.491) 0:02:52.196 ********* 2026-03-17 01:03:58.154547 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:03:58.154553 | orchestrator | 
2026-03-17 01:03:58.154559 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-17 01:03:58.154565 | orchestrator | Tuesday 17 March 2026 01:03:36 +0000 (0:00:00.453) 0:02:52.650 ********* 2026-03-17 01:03:58.154571 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.154578 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.154584 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.154589 | orchestrator | 2026-03-17 01:03:58.154595 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-17 01:03:58.154601 | orchestrator | Tuesday 17 March 2026 01:03:39 +0000 (0:00:02.464) 0:02:55.114 ********* 2026-03-17 01:03:58.154607 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.154612 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.154618 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.154623 | orchestrator | 2026-03-17 01:03:58.154629 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-17 01:03:58.154654 | orchestrator | Tuesday 17 March 2026 01:03:41 +0000 (0:00:02.168) 0:02:57.282 ********* 2026-03-17 01:03:58.154660 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.154666 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.154680 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.154686 | orchestrator | 2026-03-17 01:03:58.154693 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-17 01:03:58.154701 | orchestrator | Tuesday 17 March 2026 01:03:43 +0000 (0:00:02.083) 0:02:59.366 ********* 2026-03-17 01:03:58.154707 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.154712 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.154717 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:03:58.154723 | orchestrator | 
2026-03-17 01:03:58.154728 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-17 01:03:58.154735 | orchestrator | Tuesday 17 March 2026 01:03:45 +0000 (0:00:02.738) 0:03:02.104 ********* 2026-03-17 01:03:58.154740 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.154747 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:03:58.154753 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:03:58.154758 | orchestrator | 2026-03-17 01:03:58.154764 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-17 01:03:58.154770 | orchestrator | Tuesday 17 March 2026 01:03:49 +0000 (0:00:03.888) 0:03:05.993 ********* 2026-03-17 01:03:58.154776 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.154782 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.154788 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.154794 | orchestrator | 2026-03-17 01:03:58.154800 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-17 01:03:58.154807 | orchestrator | Tuesday 17 March 2026 01:03:51 +0000 (0:00:01.800) 0:03:07.793 ********* 2026-03-17 01:03:58.154814 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.154819 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.154825 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.154831 | orchestrator | 2026-03-17 01:03:58.154837 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-17 01:03:58.154844 | orchestrator | Tuesday 17 March 2026 01:03:52 +0000 (0:00:00.446) 0:03:08.239 ********* 2026-03-17 01:03:58.154850 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:03:58.154856 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:03:58.154861 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:03:58.154867 | orchestrator | 2026-03-17 01:03:58.154873 | 
orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-17 01:03:58.154884 | orchestrator | Tuesday 17 March 2026 01:03:54 +0000 (0:00:02.578) 0:03:10.818 ********* 2026-03-17 01:03:58.154890 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:03:58.154895 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:03:58.154901 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:03:58.154907 | orchestrator | 2026-03-17 01:03:58.154913 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:03:58.154920 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-17 01:03:58.154926 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1  2026-03-17 01:03:58.154933 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-03-17 01:03:58.154945 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-03-17 01:03:58.154952 | orchestrator | 2026-03-17 01:03:58.154958 | orchestrator | 2026-03-17 01:03:58.154964 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:03:58.154970 | orchestrator | Tuesday 17 March 2026 01:03:54 +0000 (0:00:00.235) 0:03:11.053 ********* 2026-03-17 01:03:58.154976 | orchestrator | =============================================================================== 2026-03-17 01:03:58.154982 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.63s 2026-03-17 01:03:58.154988 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.13s 2026-03-17 01:03:58.154994 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.80s 2026-03-17 01:03:58.155000 | orchestrator | 
mariadb : Running MariaDB bootstrap container -------------------------- 10.73s 2026-03-17 01:03:58.155005 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.54s 2026-03-17 01:03:58.155011 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.41s 2026-03-17 01:03:58.155017 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.74s 2026-03-17 01:03:58.155022 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s 2026-03-17 01:03:58.155028 | orchestrator | service-check : mariadb | Get container facts --------------------------- 3.89s 2026-03-17 01:03:58.155034 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.77s 2026-03-17 01:03:58.155040 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.71s 2026-03-17 01:03:58.155046 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.32s 2026-03-17 01:03:58.155052 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.32s 2026-03-17 01:03:58.155057 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.12s 2026-03-17 01:03:58.155063 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.99s 2026-03-17 01:03:58.155069 | orchestrator | Check MariaDB service --------------------------------------------------- 2.99s 2026-03-17 01:03:58.155075 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.79s 2026-03-17 01:03:58.155081 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.74s 2026-03-17 01:03:58.155091 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.58s 2026-03-17 01:03:58.155098 | orchestrator | mariadb : Wait 
for MariaDB service to sync WSREP ------------------------ 2.49s 2026-03-17 01:03:58.155105 | orchestrator | 2026-03-17 01:03:58 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:03:58.155112 | orchestrator | 2026-03-17 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:01.193915 | orchestrator | 2026-03-17 01:04:01 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED 2026-03-17 01:04:01.194223 | orchestrator | 2026-03-17 01:04:01 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:01.195107 | orchestrator | 2026-03-17 01:04:01 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:01.195134 | orchestrator | 2026-03-17 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:04.224145 | orchestrator | 2026-03-17 01:04:04 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED 2026-03-17 01:04:04.224200 | orchestrator | 2026-03-17 01:04:04 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:04.226053 | orchestrator | 2026-03-17 01:04:04 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:04.226103 | orchestrator | 2026-03-17 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:07.260790 | orchestrator | 2026-03-17 01:04:07 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED 2026-03-17 01:04:07.261279 | orchestrator | 2026-03-17 01:04:07 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:07.261989 | orchestrator | 2026-03-17 01:04:07 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:07.262044 | orchestrator | 2026-03-17 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:10.288615 | orchestrator | 2026-03-17 01:04:10 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in 
state STARTED 2026-03-17 01:04:10.290550 | orchestrator | 2026-03-17 01:04:10 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:10.291461 | orchestrator | 2026-03-17 01:04:10 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:10.291528 | orchestrator | 2026-03-17 01:04:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:31.534531 | orchestrator | 2026-03-17 01:04:31 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED 2026-03-17 01:04:31.535928 | orchestrator | 2026-03-17 01:04:31 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:31.537744 | orchestrator | 2026-03-17 01:04:31 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:31.538001 | orchestrator | 2026-03-17 01:04:31 | INFO  |
Wait 1 second(s) until the next check 2026-03-17 01:04:34.578793 | orchestrator | 2026-03-17 01:04:34 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED 2026-03-17 01:04:34.580400 | orchestrator | 2026-03-17 01:04:34 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:34.583220 | orchestrator | 2026-03-17 01:04:34 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:34.583265 | orchestrator | 2026-03-17 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:37.629914 | orchestrator | 2026-03-17 01:04:37 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state STARTED 2026-03-17 01:04:37.631323 | orchestrator | 2026-03-17 01:04:37 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:37.632839 | orchestrator | 2026-03-17 01:04:37 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:37.632875 | orchestrator | 2026-03-17 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:40.677904 | orchestrator | 2026-03-17 01:04:40 | INFO  | Task 7a381696-5fbd-4890-9b9c-ec861d995aa1 is in state SUCCESS 2026-03-17 01:04:40.679134 | orchestrator | 2026-03-17 01:04:40.679189 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-17 01:04:40.679198 | orchestrator | 2.16.14 2026-03-17 01:04:40.679206 | orchestrator | 2026-03-17 01:04:40.679213 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-17 01:04:40.679221 | orchestrator | 2026-03-17 01:04:40.679227 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-17 01:04:40.679234 | orchestrator | Tuesday 17 March 2026 01:02:39 +0000 (0:00:00.459) 0:00:00.459 ********* 2026-03-17 01:04:40.679240 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-03-17 01:04:40.679258 | orchestrator | 2026-03-17 01:04:40.679262 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-17 01:04:40.679266 | orchestrator | Tuesday 17 March 2026 01:02:39 +0000 (0:00:00.428) 0:00:00.888 ********* 2026-03-17 01:04:40.679270 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.679274 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.679278 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.679282 | orchestrator | 2026-03-17 01:04:40.679286 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-17 01:04:40.679290 | orchestrator | Tuesday 17 March 2026 01:02:40 +0000 (0:00:00.956) 0:00:01.845 ********* 2026-03-17 01:04:40.679294 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.679297 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.679301 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.679305 | orchestrator | 2026-03-17 01:04:40.679309 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-17 01:04:40.679313 | orchestrator | Tuesday 17 March 2026 01:02:41 +0000 (0:00:00.233) 0:00:02.079 ********* 2026-03-17 01:04:40.679317 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.679320 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.679326 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.679332 | orchestrator | 2026-03-17 01:04:40.679338 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-17 01:04:40.679345 | orchestrator | Tuesday 17 March 2026 01:02:41 +0000 (0:00:00.691) 0:00:02.770 ********* 2026-03-17 01:04:40.679351 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.679356 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.679362 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.679367 | orchestrator | 
2026-03-17 01:04:40.679373 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-17 01:04:40.679379 | orchestrator | Tuesday 17 March 2026 01:02:42 +0000 (0:00:00.259) 0:00:03.030 ********* 2026-03-17 01:04:40.679385 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.679390 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.679396 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.679401 | orchestrator | 2026-03-17 01:04:40.679415 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-17 01:04:40.679422 | orchestrator | Tuesday 17 March 2026 01:02:42 +0000 (0:00:00.246) 0:00:03.276 ********* 2026-03-17 01:04:40.679428 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.679435 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.679441 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.679447 | orchestrator | 2026-03-17 01:04:40.679455 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-17 01:04:40.679460 | orchestrator | Tuesday 17 March 2026 01:02:42 +0000 (0:00:00.269) 0:00:03.546 ********* 2026-03-17 01:04:40.679466 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.679474 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.679707 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.679723 | orchestrator | 2026-03-17 01:04:40.679727 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-17 01:04:40.679731 | orchestrator | Tuesday 17 March 2026 01:02:42 +0000 (0:00:00.381) 0:00:03.927 ********* 2026-03-17 01:04:40.679735 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.679739 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.679745 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.679752 | orchestrator | 2026-03-17 01:04:40.679761 | orchestrator | TASK 
[ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-17 01:04:40.679769 | orchestrator | Tuesday 17 March 2026 01:02:43 +0000 (0:00:00.257) 0:00:04.185 ********* 2026-03-17 01:04:40.679776 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:04:40.679782 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:04:40.679797 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:04:40.679803 | orchestrator | 2026-03-17 01:04:40.679809 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-17 01:04:40.679814 | orchestrator | Tuesday 17 March 2026 01:02:43 +0000 (0:00:00.570) 0:00:04.755 ********* 2026-03-17 01:04:40.679820 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.679827 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.679834 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.679841 | orchestrator | 2026-03-17 01:04:40.679847 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-17 01:04:40.679853 | orchestrator | Tuesday 17 March 2026 01:02:44 +0000 (0:00:00.363) 0:00:05.119 ********* 2026-03-17 01:04:40.679860 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:04:40.679867 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:04:40.679873 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:04:40.679880 | orchestrator | 2026-03-17 01:04:40.679884 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-17 01:04:40.679888 | orchestrator | Tuesday 17 March 2026 01:02:46 +0000 (0:00:02.866) 0:00:07.985 ********* 2026-03-17 
01:04:40.679892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-17 01:04:40.679896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-17 01:04:40.679900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-17 01:04:40.679906 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.679958 | orchestrator | 2026-03-17 01:04:40.679976 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-17 01:04:40.679982 | orchestrator | Tuesday 17 March 2026 01:02:47 +0000 (0:00:00.397) 0:00:08.383 ********* 2026-03-17 01:04:40.679988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.679993 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.679997 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.680003 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680009 | orchestrator | 2026-03-17 01:04:40.680015 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-17 01:04:40.680021 | orchestrator | Tuesday 17 March 2026 01:02:48 +0000 (0:00:00.778) 0:00:09.161 ********* 2026-03-17 01:04:40.680028 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.680228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.680247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.680260 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680266 | orchestrator | 2026-03-17 01:04:40.680277 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-17 01:04:40.680286 | orchestrator | Tuesday 17 March 2026 01:02:48 +0000 (0:00:00.157) 0:00:09.318 ********* 2026-03-17 01:04:40.680293 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6ef57ab476ad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-17 01:02:45.049927', 'end': '2026-03-17 01:02:45.087643', 'delta': '0:00:00.037716', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', 
'_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6ef57ab476ad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-17 01:04:40.680300 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '56f2ea22de30', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-17 01:02:46.044811', 'end': '2026-03-17 01:02:46.077045', 'delta': '0:00:00.032234', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['56f2ea22de30'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-17 01:04:40.680333 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '93fe2648bebe', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-17 01:02:46.803660', 'end': '2026-03-17 01:02:46.831909', 'delta': '0:00:00.028249', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['93fe2648bebe'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-17 01:04:40.680340 | orchestrator | 2026-03-17 
01:04:40.680346 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-17 01:04:40.680351 | orchestrator | Tuesday 17 March 2026 01:02:48 +0000 (0:00:00.376) 0:00:09.695 ********* 2026-03-17 01:04:40.680357 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.680364 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.680370 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.680376 | orchestrator | 2026-03-17 01:04:40.680382 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-17 01:04:40.680388 | orchestrator | Tuesday 17 March 2026 01:02:49 +0000 (0:00:00.415) 0:00:10.111 ********* 2026-03-17 01:04:40.680393 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-17 01:04:40.680400 | orchestrator | 2026-03-17 01:04:40.680405 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-17 01:04:40.680411 | orchestrator | Tuesday 17 March 2026 01:02:50 +0000 (0:00:01.860) 0:00:11.971 ********* 2026-03-17 01:04:40.680459 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680464 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680468 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.680472 | orchestrator | 2026-03-17 01:04:40.680476 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-17 01:04:40.680480 | orchestrator | Tuesday 17 March 2026 01:02:51 +0000 (0:00:00.283) 0:00:12.255 ********* 2026-03-17 01:04:40.680484 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680488 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680491 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.680495 | orchestrator | 2026-03-17 01:04:40.680503 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-17 
01:04:40.680507 | orchestrator | Tuesday 17 March 2026 01:02:51 +0000 (0:00:00.368) 0:00:12.623 ********* 2026-03-17 01:04:40.680510 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680514 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680518 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.680522 | orchestrator | 2026-03-17 01:04:40.680526 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-17 01:04:40.680530 | orchestrator | Tuesday 17 March 2026 01:02:51 +0000 (0:00:00.346) 0:00:12.970 ********* 2026-03-17 01:04:40.680534 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.680537 | orchestrator | 2026-03-17 01:04:40.680541 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-17 01:04:40.680545 | orchestrator | Tuesday 17 March 2026 01:02:52 +0000 (0:00:00.129) 0:00:13.099 ********* 2026-03-17 01:04:40.680549 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680553 | orchestrator | 2026-03-17 01:04:40.680557 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-17 01:04:40.680560 | orchestrator | Tuesday 17 March 2026 01:02:52 +0000 (0:00:00.207) 0:00:13.307 ********* 2026-03-17 01:04:40.680564 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680568 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680572 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.680576 | orchestrator | 2026-03-17 01:04:40.680582 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-17 01:04:40.680665 | orchestrator | Tuesday 17 March 2026 01:02:52 +0000 (0:00:00.263) 0:00:13.571 ********* 2026-03-17 01:04:40.680676 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680683 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680689 | orchestrator | 
skipping: [testbed-node-5] 2026-03-17 01:04:40.680695 | orchestrator | 2026-03-17 01:04:40.680702 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-17 01:04:40.680708 | orchestrator | Tuesday 17 March 2026 01:02:52 +0000 (0:00:00.278) 0:00:13.849 ********* 2026-03-17 01:04:40.680715 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680719 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680723 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.680727 | orchestrator | 2026-03-17 01:04:40.680730 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-17 01:04:40.680734 | orchestrator | Tuesday 17 March 2026 01:02:53 +0000 (0:00:00.434) 0:00:14.284 ********* 2026-03-17 01:04:40.680738 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680742 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680745 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.680749 | orchestrator | 2026-03-17 01:04:40.680753 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-17 01:04:40.680757 | orchestrator | Tuesday 17 March 2026 01:02:53 +0000 (0:00:00.277) 0:00:14.562 ********* 2026-03-17 01:04:40.680760 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680764 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680768 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.680772 | orchestrator | 2026-03-17 01:04:40.680775 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-17 01:04:40.680784 | orchestrator | Tuesday 17 March 2026 01:02:53 +0000 (0:00:00.259) 0:00:14.821 ********* 2026-03-17 01:04:40.680788 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680791 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680795 | orchestrator | 
skipping: [testbed-node-5] 2026-03-17 01:04:40.680820 | orchestrator | 2026-03-17 01:04:40.680825 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-17 01:04:40.680829 | orchestrator | Tuesday 17 March 2026 01:02:54 +0000 (0:00:00.281) 0:00:15.102 ********* 2026-03-17 01:04:40.680833 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.680837 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.680841 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.680845 | orchestrator | 2026-03-17 01:04:40.680849 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-17 01:04:40.680855 | orchestrator | Tuesday 17 March 2026 01:02:54 +0000 (0:00:00.394) 0:00:15.497 ********* 2026-03-17 01:04:40.680863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16ca22cf--64f9--579d--994c--d43933026c5f-osd--block--16ca22cf--64f9--579d--994c--d43933026c5f', 'dm-uuid-LVM-y2HbUUaZfCONiEzQN3cazUkYUoAkrZdHW8PKjpGId1qTLuMh3ALH0t52wbEKMY8J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5-osd--block--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5', 'dm-uuid-LVM-JHeqYSnhBZTczYlYzdSyJxeUPOE5DyFmwNGrA98SMV8wmMFvK1WpqrCejcqRorYA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680912 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--d77b95b6--dc37--5eed--9a6e--c7871424e120-osd--block--d77b95b6--dc37--5eed--9a6e--c7871424e120', 'dm-uuid-LVM-HqNVUzr8tfZe3LbFOrpJzLVzQO0BoGHOfw6I8RT5B3XStRo9OHByj7YlEavSR3LT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ec88a4df--1f79--596d--b281--118c477c78df-osd--block--ec88a4df--1f79--596d--b281--118c477c78df', 'dm-uuid-LVM-jWrHNBceoo0lz8m0pcwMKXx2PYvwcJVmqiWNOrWp1aheViUA724rHCoEH3YjDjN0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.680995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681004 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--16ca22cf--64f9--579d--994c--d43933026c5f-osd--block--16ca22cf--64f9--579d--994c--d43933026c5f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cDgNKN-65o9-GCYm-jd5N-jxY5-Xwfs-AuB9us', 'scsi-0QEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1', 'scsi-SQEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5-osd--block--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SXu2t4-xlmT-nWR5-Vn1s-LLKz-MhzX-OInbL9', 'scsi-0QEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184', 'scsi-SQEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d', 'scsi-SQEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-17 01:04:40.681077 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.681084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part1', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part14', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part15', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part16', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50c44467--b3f7--539a--99b7--df2211d1583b-osd--block--50c44467--b3f7--539a--99b7--df2211d1583b', 'dm-uuid-LVM-iBPoFze9hkTVnKW4shdae6O6KrVi6HnK8GsOucTdh8eWFD4mzU14n9FDjGCSir6w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d77b95b6--dc37--5eed--9a6e--c7871424e120-osd--block--d77b95b6--dc37--5eed--9a6e--c7871424e120'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YEn508-grn6-JU5N-zREC-OznN-9GB5-smBjJ5', 'scsi-0QEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235', 'scsi-SQEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9465b490--647b--5adb--8e2e--a5649c4bc673-osd--block--9465b490--647b--5adb--8e2e--a5649c4bc673', 'dm-uuid-LVM-Zam2M2X1xaV047uPshlTJTQeMm2QQ29xiPaMt6CCMJ8QQK5C3Ff1lJKKRu3FerJY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ec88a4df--1f79--596d--b281--118c477c78df-osd--block--ec88a4df--1f79--596d--b281--118c477c78df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gv1WXC-350m-0b7t-fELq-YK9T-Jau5-utKItL', 'scsi-0QEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32', 'scsi-SQEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b', 'scsi-SQEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-17 01:04:40.681190 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.681198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-17 01:04:40.681245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part16', 
'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--50c44467--b3f7--539a--99b7--df2211d1583b-osd--block--50c44467--b3f7--539a--99b7--df2211d1583b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zq4wmp-0FMJ-yEfL-PBHg-uBmH-1kra-xy1Esb', 'scsi-0QEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7', 'scsi-SQEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9465b490--647b--5adb--8e2e--a5649c4bc673-osd--block--9465b490--647b--5adb--8e2e--a5649c4bc673'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NjfmJl-xYO1-1oP1-2iIM-GqNQ-TrFA-8xMy2e', 'scsi-0QEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865', 'scsi-SQEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276', 'scsi-SQEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-17 01:04:40.681306 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:04:40.681318 | orchestrator | 2026-03-17 01:04:40.681324 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-17 01:04:40.681332 | orchestrator | Tuesday 17 March 2026 01:02:54 +0000 (0:00:00.461) 0:00:15.958 ********* 2026-03-17 01:04:40.681338 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--16ca22cf--64f9--579d--994c--d43933026c5f-osd--block--16ca22cf--64f9--579d--994c--d43933026c5f', 'dm-uuid-LVM-y2HbUUaZfCONiEzQN3cazUkYUoAkrZdHW8PKjpGId1qTLuMh3ALH0t52wbEKMY8J'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5-osd--block--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5', 'dm-uuid-LVM-JHeqYSnhBZTczYlYzdSyJxeUPOE5DyFmwNGrA98SMV8wmMFvK1WpqrCejcqRorYA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681386 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681402 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681410 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d77b95b6--dc37--5eed--9a6e--c7871424e120-osd--block--d77b95b6--dc37--5eed--9a6e--c7871424e120', 'dm-uuid-LVM-HqNVUzr8tfZe3LbFOrpJzLVzQO0BoGHOfw6I8RT5B3XStRo9OHByj7YlEavSR3LT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681422 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ec88a4df--1f79--596d--b281--118c477c78df-osd--block--ec88a4df--1f79--596d--b281--118c477c78df', 'dm-uuid-LVM-jWrHNBceoo0lz8m0pcwMKXx2PYvwcJVmqiWNOrWp1aheViUA724rHCoEH3YjDjN0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681429 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part1', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part14', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part15', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part16', 'scsi-SQEMU_QEMU_HARDDISK_d1d4b81a-b793-41a0-ad40-9abf2e7492cb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681436 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681442 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--16ca22cf--64f9--579d--994c--d43933026c5f-osd--block--16ca22cf--64f9--579d--994c--d43933026c5f'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cDgNKN-65o9-GCYm-jd5N-jxY5-Xwfs-AuB9us', 'scsi-0QEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1', 'scsi-SQEMU_QEMU_HARDDISK_5cc759d4-bbcf-4791-ab44-d26d1bbabcc1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681451 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5-osd--block--b13aeae0--05c6--5bfd--ada4--b68b1762c1d5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SXu2t4-xlmT-nWR5-Vn1s-LLKz-MhzX-OInbL9', 'scsi-0QEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184', 'scsi-SQEMU_QEMU_HARDDISK_3efb5a56-103b-42d9-8866-8efb8a438184'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d', 'scsi-SQEMU_QEMU_HARDDISK_23482283-1618-4112-88d0-516e8abcc23d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681477 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681483 | orchestrator | skipping: 
[testbed-node-3] 2026-03-17 01:04:40.681490 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681497 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681510 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681515 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681523 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part1', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part14', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part15', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part16', 'scsi-SQEMU_QEMU_HARDDISK_5fc221d6-1f30-457e-9b4e-578a7aeb5c88-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681529 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50c44467--b3f7--539a--99b7--df2211d1583b-osd--block--50c44467--b3f7--539a--99b7--df2211d1583b', 'dm-uuid-LVM-iBPoFze9hkTVnKW4shdae6O6KrVi6HnK8GsOucTdh8eWFD4mzU14n9FDjGCSir6w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681536 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d77b95b6--dc37--5eed--9a6e--c7871424e120-osd--block--d77b95b6--dc37--5eed--9a6e--c7871424e120'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YEn508-grn6-JU5N-zREC-OznN-9GB5-smBjJ5', 'scsi-0QEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235', 'scsi-SQEMU_QEMU_HARDDISK_d717cdad-60c8-49b4-a1ca-e286e86fc235'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681540 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9465b490--647b--5adb--8e2e--a5649c4bc673-osd--block--9465b490--647b--5adb--8e2e--a5649c4bc673', 'dm-uuid-LVM-Zam2M2X1xaV047uPshlTJTQeMm2QQ29xiPaMt6CCMJ8QQK5C3Ff1lJKKRu3FerJY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681547 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--ec88a4df--1f79--596d--b281--118c477c78df-osd--block--ec88a4df--1f79--596d--b281--118c477c78df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Gv1WXC-350m-0b7t-fELq-YK9T-Jau5-utKItL', 'scsi-0QEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32', 'scsi-SQEMU_QEMU_HARDDISK_d8c7f886-b638-428f-9acd-2bef6a3abd32'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681551 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681555 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-03-17 01:04:40.681565 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b', 'scsi-SQEMU_QEMU_HARDDISK_c18a6eac-daa9-4a49-b877-784985e05b4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681569 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681573 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681577 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.681585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681589 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681613 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681628 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681635 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_dcece8a6-a124-4356-af52-fd20405fc0e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-17 01:04:40.681644 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--50c44467--b3f7--539a--99b7--df2211d1583b-osd--block--50c44467--b3f7--539a--99b7--df2211d1583b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zq4wmp-0FMJ-yEfL-PBHg-uBmH-1kra-xy1Esb', 'scsi-0QEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7', 'scsi-SQEMU_QEMU_HARDDISK_d1d144f4-1f7d-43cf-b529-b5ecced41bc7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681649 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9465b490--647b--5adb--8e2e--a5649c4bc673-osd--block--9465b490--647b--5adb--8e2e--a5649c4bc673'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NjfmJl-xYO1-1oP1-2iIM-GqNQ-TrFA-8xMy2e', 'scsi-0QEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865', 'scsi-SQEMU_QEMU_HARDDISK_c89d09f1-caef-4162-a829-09cd388ce865'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681653 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276', 'scsi-SQEMU_QEMU_HARDDISK_792a3cd6-8361-4aa2-9d0e-e1d89bff3276'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-17 01:04:40.681659 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-17-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-17 01:04:40.681664 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:04:40.681668 | orchestrator |
2026-03-17 01:04:40.681671 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-17 01:04:40.681675 | orchestrator | Tuesday 17 March 2026 01:02:55 +0000 (0:00:00.568) 0:00:16.527 *********
2026-03-17 01:04:40.681683 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:04:40.681687 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:04:40.681691 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:04:40.681695 | orchestrator |
2026-03-17 01:04:40.681699 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-17 01:04:40.681702 | orchestrator | Tuesday 17 March 2026 01:02:56 +0000 (0:00:00.664) 0:00:17.192 *********
2026-03-17 01:04:40.681706 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:04:40.681710 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:04:40.681714 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:04:40.681718 | orchestrator |
2026-03-17 01:04:40.681721 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-17 01:04:40.681725 | orchestrator | Tuesday 17 March 2026 01:02:56 +0000 (0:00:00.379) 0:00:17.571 *********
2026-03-17 01:04:40.681729 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:04:40.681734 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:04:40.681740 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:04:40.681745 | orchestrator |
2026-03-17 01:04:40.681751 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-17 01:04:40.681756 | orchestrator | Tuesday 17 March 2026 01:02:57 +0000 (0:00:00.676) 0:00:18.247 *********
2026-03-17 01:04:40.681762 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:04:40.681767 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:04:40.681773 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:04:40.681779 | orchestrator |
2026-03-17 01:04:40.681785 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-17 01:04:40.681791 | orchestrator | Tuesday 17 March 2026 01:02:57 +0000 (0:00:00.257) 0:00:18.504 *********
2026-03-17 01:04:40.681797 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:04:40.681803 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:04:40.681808 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:04:40.681814 | orchestrator |
2026-03-17 01:04:40.681823 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-17 01:04:40.681830 | orchestrator | Tuesday 17 March 2026 01:02:57 +0000 (0:00:00.341) 0:00:18.846 *********
2026-03-17 01:04:40.681835 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:04:40.681842 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:04:40.681848 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:04:40.681855 | orchestrator |
2026-03-17 01:04:40.681861 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-17 01:04:40.681868 | orchestrator | Tuesday 17 March 2026 01:02:58 +0000 (0:00:00.398) 0:00:19.244 *********
2026-03-17 01:04:40.681874 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 01:04:40.681882 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-17 01:04:40.681891 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-17 01:04:40.681897 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 01:04:40.681903 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-17 01:04:40.681909 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-17 01:04:40.681927 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 01:04:40.681934 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-17 01:04:40.681939 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-17 01:04:40.681945 | orchestrator |
2026-03-17 01:04:40.681951 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-17 01:04:40.681958 | orchestrator | Tuesday 17 March 2026 01:02:58 +0000 (0:00:00.739) 0:00:19.984 *********
2026-03-17 01:04:40.681964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-17 01:04:40.681971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-17 01:04:40.681977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-17 01:04:40.681983 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:04:40.681995 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-17 01:04:40.682002 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-17 01:04:40.682009 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-17 01:04:40.682155 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:04:40.682165 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-17 01:04:40.682171 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-17 01:04:40.682178 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-17 01:04:40.682183 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:04:40.682189 | orchestrator |
2026-03-17 01:04:40.682195 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-17 01:04:40.682202 | orchestrator | Tuesday 17 March 2026 01:02:59 +0000 (0:00:00.292) 0:00:20.277 *********
2026-03-17 01:04:40.682208 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:04:40.682215 | orchestrator |
2026-03-17 01:04:40.682221 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-17 01:04:40.682227 | orchestrator | Tuesday 17 March 2026 01:02:59 +0000 (0:00:00.569) 0:00:20.846 *********
2026-03-17 01:04:40.682242 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:04:40.682250 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:04:40.682256 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:04:40.682264 | orchestrator |
2026-03-17 01:04:40.682270 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-17 01:04:40.682276 | orchestrator | Tuesday 17 March 2026 01:03:00 +0000 (0:00:00.277) 0:00:21.124 *********
2026-03-17 01:04:40.682282 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:04:40.682288 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:04:40.682294 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:04:40.682301 | orchestrator |
2026-03-17 01:04:40.682307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-17 01:04:40.682314 | orchestrator | Tuesday 17 March 2026 01:03:00 +0000 (0:00:00.275) 0:00:21.400 *********
2026-03-17 01:04:40.682320 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:04:40.682327 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:04:40.682333 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:04:40.682339 | orchestrator |
2026-03-17 01:04:40.682344 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-17 01:04:40.682351 | orchestrator | Tuesday 17 March 2026 01:03:00 +0000 (0:00:00.283) 0:00:21.684 *********
2026-03-17
01:04:40.682358 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.682364 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.682370 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.682377 | orchestrator | 2026-03-17 01:04:40.682383 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-17 01:04:40.682390 | orchestrator | Tuesday 17 March 2026 01:03:01 +0000 (0:00:00.464) 0:00:22.148 ********* 2026-03-17 01:04:40.682396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:04:40.682402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:04:40.682408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:04:40.682414 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.682420 | orchestrator | 2026-03-17 01:04:40.682427 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-17 01:04:40.682433 | orchestrator | Tuesday 17 March 2026 01:03:01 +0000 (0:00:00.336) 0:00:22.484 ********* 2026-03-17 01:04:40.682439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:04:40.682445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:04:40.682452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:04:40.682458 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.682472 | orchestrator | 2026-03-17 01:04:40.682480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-17 01:04:40.682491 | orchestrator | Tuesday 17 March 2026 01:03:01 +0000 (0:00:00.332) 0:00:22.817 ********* 2026-03-17 01:04:40.682498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-17 01:04:40.682504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-17 01:04:40.682511 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-17 01:04:40.682517 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.682523 | orchestrator | 2026-03-17 01:04:40.682529 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-17 01:04:40.682535 | orchestrator | Tuesday 17 March 2026 01:03:02 +0000 (0:00:00.346) 0:00:23.163 ********* 2026-03-17 01:04:40.682541 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:04:40.682548 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:04:40.682554 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:04:40.682561 | orchestrator | 2026-03-17 01:04:40.682568 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-17 01:04:40.682574 | orchestrator | Tuesday 17 March 2026 01:03:02 +0000 (0:00:00.336) 0:00:23.499 ********* 2026-03-17 01:04:40.682581 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-17 01:04:40.682588 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-17 01:04:40.682609 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-17 01:04:40.682616 | orchestrator | 2026-03-17 01:04:40.682623 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-17 01:04:40.682630 | orchestrator | Tuesday 17 March 2026 01:03:02 +0000 (0:00:00.426) 0:00:23.925 ********* 2026-03-17 01:04:40.682636 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:04:40.682644 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:04:40.682650 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:04:40.682657 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-17 01:04:40.682663 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-17 01:04:40.682669 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-17 01:04:40.682676 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-17 01:04:40.682682 | orchestrator | 2026-03-17 01:04:40.682688 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-17 01:04:40.682695 | orchestrator | Tuesday 17 March 2026 01:03:03 +0000 (0:00:00.830) 0:00:24.756 ********* 2026-03-17 01:04:40.682700 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-17 01:04:40.682707 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-17 01:04:40.682714 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-17 01:04:40.682720 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-17 01:04:40.682726 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-17 01:04:40.682732 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-17 01:04:40.682745 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-17 01:04:40.682751 | orchestrator | 2026-03-17 01:04:40.682757 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-17 01:04:40.682763 | orchestrator | Tuesday 17 March 2026 01:03:05 +0000 (0:00:01.598) 0:00:26.354 ********* 2026-03-17 01:04:40.682769 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:04:40.682775 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:04:40.682781 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-17 01:04:40.682794 | orchestrator | 2026-03-17 01:04:40.682801 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-17 01:04:40.682807 | orchestrator | Tuesday 17 March 2026 01:03:05 +0000 (0:00:00.318) 0:00:26.673 ********* 2026-03-17 01:04:40.682815 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 01:04:40.682823 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 01:04:40.682830 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 01:04:40.682836 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 01:04:40.682844 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-17 01:04:40.682849 | orchestrator | 2026-03-17 01:04:40.682853 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-17 01:04:40.682858 | orchestrator | Tuesday 17 March 2026 01:03:50 +0000 (0:00:44.608) 0:01:11.281 ********* 2026-03-17 01:04:40.682862 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682869 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682875 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682885 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682891 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682898 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682904 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-17 01:04:40.682910 | orchestrator | 2026-03-17 01:04:40.682916 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-17 01:04:40.682922 | orchestrator | Tuesday 17 March 2026 01:04:12 +0000 (0:00:21.763) 0:01:33.045 ********* 2026-03-17 01:04:40.682928 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682934 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682940 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682947 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682953 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682960 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.682967 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-17 01:04:40.682974 | orchestrator | 2026-03-17 01:04:40.682981 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-17 01:04:40.682992 | orchestrator | Tuesday 17 March 2026 01:04:22 +0000 (0:00:10.631) 0:01:43.676 ********* 2026-03-17 01:04:40.682996 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.683001 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 01:04:40.683005 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 01:04:40.683009 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.683014 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 01:04:40.683022 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 01:04:40.683027 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.683043 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 01:04:40.683048 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 01:04:40.683055 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.683063 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 01:04:40.683072 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 01:04:40.683080 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.683086 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-17 01:04:40.683092 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 01:04:40.683099 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-17 01:04:40.683104 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-17 01:04:40.683111 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-17 01:04:40.683117 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-17 01:04:40.683124 | orchestrator | 2026-03-17 01:04:40.683131 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:04:40.683138 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-17 01:04:40.683146 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-17 01:04:40.683154 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-17 01:04:40.683160 | orchestrator | 2026-03-17 01:04:40.683166 | orchestrator | 2026-03-17 01:04:40.683172 | orchestrator | 2026-03-17 01:04:40.683177 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:04:40.683188 | orchestrator | Tuesday 17 March 2026 01:04:37 +0000 (0:00:15.230) 0:01:58.906 ********* 2026-03-17 01:04:40.683194 | orchestrator | =============================================================================== 2026-03-17 01:04:40.683200 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.61s 2026-03-17 01:04:40.683206 | orchestrator | generate keys ---------------------------------------------------------- 21.76s 2026-03-17 01:04:40.683212 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 15.23s 
2026-03-17 01:04:40.683218 | orchestrator | get keys from monitors ------------------------------------------------- 10.63s 2026-03-17 01:04:40.683224 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.87s 2026-03-17 01:04:40.683231 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.86s 2026-03-17 01:04:40.683237 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.60s 2026-03-17 01:04:40.683249 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.96s 2026-03-17 01:04:40.683253 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.83s 2026-03-17 01:04:40.683257 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s 2026-03-17 01:04:40.683260 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.74s 2026-03-17 01:04:40.683264 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.69s 2026-03-17 01:04:40.683268 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2026-03-17 01:04:40.683272 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.66s 2026-03-17 01:04:40.683276 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.57s 2026-03-17 01:04:40.683279 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.57s 2026-03-17 01:04:40.683283 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.57s 2026-03-17 01:04:40.683287 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.46s 2026-03-17 01:04:40.683291 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.46s 2026-03-17 
01:04:40.683294 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.43s 2026-03-17 01:04:40.683298 | orchestrator | 2026-03-17 01:04:40 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:04:40.683302 | orchestrator | 2026-03-17 01:04:40 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:40.683306 | orchestrator | 2026-03-17 01:04:40 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:40.683311 | orchestrator | 2026-03-17 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:43.712901 | orchestrator | 2026-03-17 01:04:43 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:04:43.713293 | orchestrator | 2026-03-17 01:04:43 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:43.714080 | orchestrator | 2026-03-17 01:04:43 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:43.714114 | orchestrator | 2026-03-17 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:46.746273 | orchestrator | 2026-03-17 01:04:46 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:04:46.747584 | orchestrator | 2026-03-17 01:04:46 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:46.748875 | orchestrator | 2026-03-17 01:04:46 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:46.748923 | orchestrator | 2026-03-17 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:49.788161 | orchestrator | 2026-03-17 01:04:49 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:04:49.790010 | orchestrator | 2026-03-17 01:04:49 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:49.791733 | orchestrator | 2026-03-17 
01:04:49 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:49.791852 | orchestrator | 2026-03-17 01:04:49 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:52.837174 | orchestrator | 2026-03-17 01:04:52 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:04:52.839315 | orchestrator | 2026-03-17 01:04:52 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:52.840700 | orchestrator | 2026-03-17 01:04:52 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:52.841032 | orchestrator | 2026-03-17 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:55.888045 | orchestrator | 2026-03-17 01:04:55 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:04:55.890806 | orchestrator | 2026-03-17 01:04:55 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:55.892678 | orchestrator | 2026-03-17 01:04:55 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:55.892926 | orchestrator | 2026-03-17 01:04:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:04:58.939435 | orchestrator | 2026-03-17 01:04:58 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:04:58.940547 | orchestrator | 2026-03-17 01:04:58 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:04:58.941996 | orchestrator | 2026-03-17 01:04:58 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:04:58.942068 | orchestrator | 2026-03-17 01:04:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:01.993973 | orchestrator | 2026-03-17 01:05:01 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:05:01.996129 | orchestrator | 2026-03-17 01:05:01 | INFO  | Task 
382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:05:01.998009 | orchestrator | 2026-03-17 01:05:01 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:05:01.998682 | orchestrator | 2026-03-17 01:05:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:05.049453 | orchestrator | 2026-03-17 01:05:05 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:05:05.051234 | orchestrator | 2026-03-17 01:05:05 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:05:05.052872 | orchestrator | 2026-03-17 01:05:05 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:05:05.053906 | orchestrator | 2026-03-17 01:05:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:08.121004 | orchestrator | 2026-03-17 01:05:08 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:05:08.121790 | orchestrator | 2026-03-17 01:05:08 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:05:08.123012 | orchestrator | 2026-03-17 01:05:08 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:05:08.123032 | orchestrator | 2026-03-17 01:05:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:11.165388 | orchestrator | 2026-03-17 01:05:11 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state STARTED 2026-03-17 01:05:11.166791 | orchestrator | 2026-03-17 01:05:11 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:05:11.170167 | orchestrator | 2026-03-17 01:05:11 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:05:11.170217 | orchestrator | 2026-03-17 01:05:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:14.213482 | orchestrator | 2026-03-17 01:05:14 | INFO  | Task 72dbf56c-d390-4b13-961f-0bb3cce438a8 is in state 
SUCCESS 2026-03-17 01:05:14.214932 | orchestrator | 2026-03-17 01:05:14 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:05:14.216656 | orchestrator | 2026-03-17 01:05:14 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:05:14.216838 | orchestrator | 2026-03-17 01:05:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:17.246912 | orchestrator | 2026-03-17 01:05:17 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:05:17.247939 | orchestrator | 2026-03-17 01:05:17 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:05:17.249141 | orchestrator | 2026-03-17 01:05:17 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED 2026-03-17 01:05:17.249166 | orchestrator | 2026-03-17 01:05:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:20.292874 | orchestrator | 2026-03-17 01:05:20 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:05:20.293666 | orchestrator | 2026-03-17 01:05:20 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:05:20.295438 | orchestrator | 2026-03-17 01:05:20 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED 2026-03-17 01:05:20.295623 | orchestrator | 2026-03-17 01:05:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:23.332779 | orchestrator | 2026-03-17 01:05:23 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:05:23.334131 | orchestrator | 2026-03-17 01:05:23 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state STARTED 2026-03-17 01:05:23.337135 | orchestrator | 2026-03-17 01:05:23 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED 2026-03-17 01:05:23.337172 | orchestrator | 2026-03-17 01:05:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:05:26.394908 | orchestrator | 
2026-03-17 01:05:26 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED 2026-03-17 01:05:26.398205 | orchestrator | 2026-03-17 01:05:26 | INFO  | Task 24519b9e-2e46-4427-9755-3c0533cca04c is in state SUCCESS 2026-03-17 01:05:26.399661 | orchestrator | 2026-03-17 01:05:26.399716 | orchestrator | 2026-03-17 01:05:26.399724 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-17 01:05:26.399730 | orchestrator | 2026-03-17 01:05:26.399736 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-17 01:05:26.399741 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:00.199) 0:00:00.199 ********* 2026-03-17 01:05:26.399747 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-17 01:05:26.399753 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.399759 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.399764 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:05:26.399798 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.399804 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-17 01:05:26.399810 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-17 01:05:26.399816 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:05:26.399821 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-17 01:05:26.399827 | orchestrator | 
2026-03-17 01:05:26.399833 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-17 01:05:26.399838 | orchestrator | Tuesday 17 March 2026 01:04:45 +0000 (0:00:04.722) 0:00:04.921 ********* 2026-03-17 01:05:26.399844 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-17 01:05:26.399928 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.399934 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.399940 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:05:26.399946 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.399961 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-17 01:05:26.399971 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-17 01:05:26.399977 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:05:26.399983 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-17 01:05:26.399989 | orchestrator | 2026-03-17 01:05:26.399994 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-17 01:05:26.400000 | orchestrator | Tuesday 17 March 2026 01:04:49 +0000 (0:00:04.002) 0:00:08.924 ********* 2026-03-17 01:05:26.400006 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-17 01:05:26.400012 | orchestrator | 2026-03-17 01:05:26.400044 | orchestrator | TASK [Write ceph keys to the share directory] 
********************************** 2026-03-17 01:05:26.400051 | orchestrator | Tuesday 17 March 2026 01:04:50 +0000 (0:00:01.062) 0:00:09.987 ********* 2026-03-17 01:05:26.400259 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-17 01:05:26.400272 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.400278 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.400284 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:05:26.400290 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.400295 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-17 01:05:26.400300 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-17 01:05:26.400306 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:05:26.400311 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-17 01:05:26.400316 | orchestrator | 2026-03-17 01:05:26.400330 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-17 01:05:26.400335 | orchestrator | Tuesday 17 March 2026 01:05:04 +0000 (0:00:13.950) 0:00:23.937 ********* 2026-03-17 01:05:26.400341 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-17 01:05:26.400347 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-17 01:05:26.400352 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-17 01:05:26.400358 | orchestrator | 
ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-17 01:05:26.400386 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-17 01:05:26.400393 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-17 01:05:26.400399 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-17 01:05:26.400405 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-17 01:05:26.400410 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-17 01:05:26.400423 | orchestrator | 2026-03-17 01:05:26.400429 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-17 01:05:26.400435 | orchestrator | Tuesday 17 March 2026 01:05:07 +0000 (0:00:02.987) 0:00:26.925 ********* 2026-03-17 01:05:26.400440 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-17 01:05:26.400446 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.400452 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.400457 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-17 01:05:26.400463 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-17 01:05:26.400468 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-17 01:05:26.400473 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-17 01:05:26.400479 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-17 01:05:26.400484 | orchestrator | changed: 
[testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-17 01:05:26.400490 | orchestrator | 2026-03-17 01:05:26.400495 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:05:26.400501 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:05:26.400507 | orchestrator | 2026-03-17 01:05:26.400512 | orchestrator | 2026-03-17 01:05:26.400517 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:05:26.400522 | orchestrator | Tuesday 17 March 2026 01:05:13 +0000 (0:00:06.117) 0:00:33.043 ********* 2026-03-17 01:05:26.400527 | orchestrator | =============================================================================== 2026-03-17 01:05:26.400532 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.95s 2026-03-17 01:05:26.400537 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.12s 2026-03-17 01:05:26.400543 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.72s 2026-03-17 01:05:26.400561 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.00s 2026-03-17 01:05:26.400567 | orchestrator | Check if target directories exist --------------------------------------- 2.99s 2026-03-17 01:05:26.400572 | orchestrator | Create share directory -------------------------------------------------- 1.06s 2026-03-17 01:05:26.400578 | orchestrator | 2026-03-17 01:05:26.400583 | orchestrator | 2026-03-17 01:05:26.400589 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:05:26.400594 | orchestrator | 2026-03-17 01:05:26.400599 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:05:26.400603 | orchestrator | Tuesday 17 March 
2026 01:03:58 +0000 (0:00:00.279) 0:00:00.279 ********* 2026-03-17 01:05:26.400608 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.400614 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.400619 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.400625 | orchestrator | 2026-03-17 01:05:26.400631 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:05:26.400636 | orchestrator | Tuesday 17 March 2026 01:03:58 +0000 (0:00:00.254) 0:00:00.534 ********* 2026-03-17 01:05:26.400642 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-17 01:05:26.400648 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-17 01:05:26.400653 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-17 01:05:26.400658 | orchestrator | 2026-03-17 01:05:26.400664 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-17 01:05:26.400669 | orchestrator | 2026-03-17 01:05:26.400674 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:05:26.400680 | orchestrator | Tuesday 17 March 2026 01:03:58 +0000 (0:00:00.257) 0:00:00.792 ********* 2026-03-17 01:05:26.400691 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:05:26.400696 | orchestrator | 2026-03-17 01:05:26.400702 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-17 01:05:26.400711 | orchestrator | Tuesday 17 March 2026 01:03:59 +0000 (0:00:00.517) 0:00:01.309 ********* 2026-03-17 01:05:26.400729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:05:26.400741 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:05:26.400757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:05:26.400764 | orchestrator | 2026-03-17 01:05:26.400769 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-17 01:05:26.400774 | orchestrator | Tuesday 17 March 2026 01:04:00 +0000 (0:00:01.360) 0:00:02.670 ********* 2026-03-17 01:05:26.400779 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.400785 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.400789 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.400794 | orchestrator | 2026-03-17 01:05:26.400798 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:05:26.400803 | orchestrator | Tuesday 17 March 2026 01:04:00 +0000 (0:00:00.269) 0:00:02.940 ********* 2026-03-17 01:05:26.400807 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-17 01:05:26.400812 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-17 01:05:26.400816 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-17 01:05:26.400824 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-17 01:05:26.400830 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-17 01:05:26.400834 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-17 01:05:26.400840 | 
orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-17 01:05:26.400845 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-17 01:05:26.400851 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-17 01:05:26.400856 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-17 01:05:26.400862 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-17 01:05:26.400867 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-17 01:05:26.400875 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-17 01:05:26.400881 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-17 01:05:26.400889 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-17 01:05:26.400894 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-17 01:05:26.400900 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-17 01:05:26.400905 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-17 01:05:26.400911 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-17 01:05:26.400916 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-17 01:05:26.400926 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-17 01:05:26.400931 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-17 01:05:26.400937 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False}) 
 2026-03-17 01:05:26.400942 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-17 01:05:26.400949 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-17 01:05:26.400955 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-17 01:05:26.400961 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-17 01:05:26.400967 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-17 01:05:26.400972 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-17 01:05:26.400978 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-17 01:05:26.400983 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-17 01:05:26.400988 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-17 01:05:26.400993 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-17 01:05:26.401003 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-17 01:05:26.401008 | orchestrator | 2026-03-17 01:05:26.401014 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401019 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:00.738) 0:00:03.679 ********* 2026-03-17 01:05:26.401024 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401030 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401035 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401041 | orchestrator | 2026-03-17 01:05:26.401047 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401052 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:00.261) 0:00:03.940 ********* 2026-03-17 01:05:26.401057 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401063 | orchestrator | 2026-03-17 01:05:26.401069 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401074 | orchestrator | Tuesday 17 March 2026 01:04:02 +0000 (0:00:00.137) 0:00:04.077 ********* 2026-03-17 01:05:26.401080 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401085 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401091 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.401096 | orchestrator | 2026-03-17 01:05:26.401102 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401107 | orchestrator | Tuesday 17 March 2026 01:04:02 +0000 (0:00:00.238) 0:00:04.316 ********* 2026-03-17 01:05:26.401115 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401120 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401125 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401131 | orchestrator | 2026-03-17 01:05:26.401136 | orchestrator | 
TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401141 | orchestrator | Tuesday 17 March 2026 01:04:02 +0000 (0:00:00.282) 0:00:04.598 ********* 2026-03-17 01:05:26.401147 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401153 | orchestrator | 2026-03-17 01:05:26.401158 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401163 | orchestrator | Tuesday 17 March 2026 01:04:02 +0000 (0:00:00.101) 0:00:04.699 ********* 2026-03-17 01:05:26.401169 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401174 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401179 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.401185 | orchestrator | 2026-03-17 01:05:26.401194 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401200 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:00.340) 0:00:05.040 ********* 2026-03-17 01:05:26.401206 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401212 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401217 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401222 | orchestrator | 2026-03-17 01:05:26.401227 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401233 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:00.288) 0:00:05.328 ********* 2026-03-17 01:05:26.401238 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401244 | orchestrator | 2026-03-17 01:05:26.401250 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401255 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:00.112) 0:00:05.440 ********* 2026-03-17 01:05:26.401264 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401270 | 
orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401276 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.401281 | orchestrator | 2026-03-17 01:05:26.401287 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401292 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:00.325) 0:00:05.766 ********* 2026-03-17 01:05:26.401302 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401308 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401313 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401319 | orchestrator | 2026-03-17 01:05:26.401324 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401330 | orchestrator | Tuesday 17 March 2026 01:04:04 +0000 (0:00:00.274) 0:00:06.041 ********* 2026-03-17 01:05:26.401336 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401341 | orchestrator | 2026-03-17 01:05:26.401347 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401352 | orchestrator | Tuesday 17 March 2026 01:04:04 +0000 (0:00:00.105) 0:00:06.146 ********* 2026-03-17 01:05:26.401358 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401363 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401369 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.401375 | orchestrator | 2026-03-17 01:05:26.401380 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401385 | orchestrator | Tuesday 17 March 2026 01:04:04 +0000 (0:00:00.345) 0:00:06.492 ********* 2026-03-17 01:05:26.401390 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401395 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401401 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401406 | orchestrator | 2026-03-17 
01:05:26.401412 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401417 | orchestrator | Tuesday 17 March 2026 01:04:04 +0000 (0:00:00.263) 0:00:06.755 ********* 2026-03-17 01:05:26.401422 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401428 | orchestrator | 2026-03-17 01:05:26.401433 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401438 | orchestrator | Tuesday 17 March 2026 01:04:04 +0000 (0:00:00.109) 0:00:06.865 ********* 2026-03-17 01:05:26.401444 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401449 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401455 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.401460 | orchestrator | 2026-03-17 01:05:26.401465 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401471 | orchestrator | Tuesday 17 March 2026 01:04:05 +0000 (0:00:00.236) 0:00:07.101 ********* 2026-03-17 01:05:26.401476 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401482 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401487 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401492 | orchestrator | 2026-03-17 01:05:26.401498 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401503 | orchestrator | Tuesday 17 March 2026 01:04:05 +0000 (0:00:00.360) 0:00:07.462 ********* 2026-03-17 01:05:26.401508 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401514 | orchestrator | 2026-03-17 01:05:26.401519 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401525 | orchestrator | Tuesday 17 March 2026 01:04:05 +0000 (0:00:00.133) 0:00:07.596 ********* 2026-03-17 01:05:26.401529 | orchestrator | skipping: [testbed-node-0] 
2026-03-17 01:05:26.401535 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401540 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.401627 | orchestrator | 2026-03-17 01:05:26.401634 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401640 | orchestrator | Tuesday 17 March 2026 01:04:05 +0000 (0:00:00.289) 0:00:07.886 ********* 2026-03-17 01:05:26.401645 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401650 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401656 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401661 | orchestrator | 2026-03-17 01:05:26.401667 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401672 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:00.262) 0:00:08.148 ********* 2026-03-17 01:05:26.401677 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401689 | orchestrator | 2026-03-17 01:05:26.401695 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401700 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:00.113) 0:00:08.261 ********* 2026-03-17 01:05:26.401706 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401711 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401716 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.401722 | orchestrator | 2026-03-17 01:05:26.401727 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401733 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:00.252) 0:00:08.514 ********* 2026-03-17 01:05:26.401738 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401744 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401750 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401756 | 
orchestrator | 2026-03-17 01:05:26.401761 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401774 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:00.440) 0:00:08.954 ********* 2026-03-17 01:05:26.401780 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401786 | orchestrator | 2026-03-17 01:05:26.401791 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401797 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.107) 0:00:09.061 ********* 2026-03-17 01:05:26.401802 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401808 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401814 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.401819 | orchestrator | 2026-03-17 01:05:26.401824 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401830 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.243) 0:00:09.304 ********* 2026-03-17 01:05:26.401835 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401841 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401846 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401851 | orchestrator | 2026-03-17 01:05:26.401863 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401868 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.260) 0:00:09.564 ********* 2026-03-17 01:05:26.401874 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401879 | orchestrator | 2026-03-17 01:05:26.401885 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401890 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.109) 0:00:09.674 ********* 2026-03-17 01:05:26.401896 | orchestrator | 
skipping: [testbed-node-0] 2026-03-17 01:05:26.401901 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401906 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.401912 | orchestrator | 2026-03-17 01:05:26.401918 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-17 01:05:26.401923 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.260) 0:00:09.935 ********* 2026-03-17 01:05:26.401929 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:26.401934 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:26.401940 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:26.401945 | orchestrator | 2026-03-17 01:05:26.401950 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-17 01:05:26.401956 | orchestrator | Tuesday 17 March 2026 01:04:08 +0000 (0:00:00.434) 0:00:10.369 ********* 2026-03-17 01:05:26.401962 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401967 | orchestrator | 2026-03-17 01:05:26.401972 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-17 01:05:26.401978 | orchestrator | Tuesday 17 March 2026 01:04:08 +0000 (0:00:00.111) 0:00:10.481 ********* 2026-03-17 01:05:26.401983 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.401989 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.401995 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.402005 | orchestrator | 2026-03-17 01:05:26.402011 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-17 01:05:26.402060 | orchestrator | Tuesday 17 March 2026 01:04:08 +0000 (0:00:00.251) 0:00:10.732 ********* 2026-03-17 01:05:26.402067 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:26.402073 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:26.402079 | orchestrator | changed: 
[testbed-node-0] 2026-03-17 01:05:26.402085 | orchestrator | 2026-03-17 01:05:26.402090 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-17 01:05:26.402095 | orchestrator | Tuesday 17 March 2026 01:04:10 +0000 (0:00:01.658) 0:00:12.391 ********* 2026-03-17 01:05:26.402101 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-17 01:05:26.402107 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-17 01:05:26.402112 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-17 01:05:26.402118 | orchestrator | 2026-03-17 01:05:26.402123 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-17 01:05:26.402129 | orchestrator | Tuesday 17 March 2026 01:04:12 +0000 (0:00:01.829) 0:00:14.221 ********* 2026-03-17 01:05:26.402134 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-17 01:05:26.402140 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-17 01:05:26.402146 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-17 01:05:26.402151 | orchestrator | 2026-03-17 01:05:26.402157 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-17 01:05:26.402163 | orchestrator | Tuesday 17 March 2026 01:04:14 +0000 (0:00:01.890) 0:00:16.111 ********* 2026-03-17 01:05:26.402169 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-17 01:05:26.402174 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-17 01:05:26.402179 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-17 01:05:26.402184 | orchestrator | 2026-03-17 01:05:26.402189 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-17 01:05:26.402195 | orchestrator | Tuesday 17 March 2026 01:04:15 +0000 (0:00:01.502) 0:00:17.613 ********* 2026-03-17 01:05:26.402200 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.402206 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.402211 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.402217 | orchestrator | 2026-03-17 01:05:26.402222 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-17 01:05:26.402228 | orchestrator | Tuesday 17 March 2026 01:04:15 +0000 (0:00:00.249) 0:00:17.863 ********* 2026-03-17 01:05:26.402234 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.402239 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.402245 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.402251 | orchestrator | 2026-03-17 01:05:26.402261 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-17 01:05:26.402267 | orchestrator | Tuesday 17 March 2026 01:04:16 +0000 (0:00:00.255) 0:00:18.118 ********* 2026-03-17 01:05:26.402272 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:05:26.402278 | orchestrator | 2026-03-17 01:05:26.402283 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-17 01:05:26.402290 | orchestrator | Tuesday 17 March 2026 01:04:16 +0000 (0:00:00.650) 0:00:18.769 ********* 2026-03-17 01:05:26.402308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:05:26.402329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:05:26.402340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:05:26.402345 | orchestrator | 2026-03-17 01:05:26.402350 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-17 01:05:26.402355 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:01.311) 0:00:20.081 ********* 2026-03-17 01:05:26.402366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:05:26.402375 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.402380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:05:26.402386 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.402397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 
'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:05:26.402405 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.402411 | orchestrator | 2026-03-17 01:05:26.402416 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-17 01:05:26.402421 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:00.715) 0:00:20.796 ********* 2026-03-17 01:05:26.402427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:05:26.402432 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.402444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:05:26.402453 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:26.402462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:05:26.402472 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:26.402477 | orchestrator | 2026-03-17 01:05:26.402483 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-03-17 01:05:26.402488 | orchestrator | Tuesday 17 March 2026 01:04:19 +0000 (0:00:00.945) 0:00:21.741 ********* 2026-03-17 01:05:26.402498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:05:26.402512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:05:26.402524 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-17 01:05:26.402530 | orchestrator | 2026-03-17 01:05:26.402535 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-03-17 01:05:26.402540 | orchestrator | Tuesday 17 March 2026 01:04:21 +0000 (0:00:01.280) 0:00:23.022 ********* 2026-03-17 01:05:26.402561 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:05:26.402567 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:05:26.402572 | orchestrator | } 2026-03-17 01:05:26.402577 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:05:26.402583 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:05:26.402587 | orchestrator | } 2026-03-17 01:05:26.402592 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:05:26.402597 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:05:26.402606 | orchestrator | } 2026-03-17 01:05:26.402611 | orchestrator | 2026-03-17 01:05:26.402616 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:05:26.402621 | orchestrator | Tuesday 17 March 2026 01:04:21 +0000 (0:00:00.296) 0:00:23.319 ********* 2026-03-17 01:05:26.402635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:05:26.402641 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:26.402648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-17 01:05:26.402656 | orchestrator | 
skipping: [testbed-node-1] 2026-03-17 01:05:26.402666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-17 01:05:26.402672 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:05:26.402677 | orchestrator |
2026-03-17 01:05:26.402682 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-17 01:05:26.402687 | orchestrator | Tuesday 17 March 2026 01:04:22 +0000 (0:00:01.034) 0:00:24.353 *********
2026-03-17 01:05:26.402692 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:05:26.402697 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:05:26.402702 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:05:26.402712 | orchestrator |
2026-03-17 01:05:26.402718 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-17 01:05:26.402723 | orchestrator | Tuesday 17 March 2026 01:04:22 +0000 (0:00:00.271) 0:00:24.625 *********
2026-03-17 01:05:26.402728 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:05:26.402733 | orchestrator |
2026-03-17 01:05:26.402738 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-17 01:05:26.402747 | orchestrator | Tuesday 17 March 2026 01:04:23 +0000 (0:00:00.670) 0:00:25.296 *********
2026-03-17 01:05:26.402753 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:05:26.402757 | orchestrator |
2026-03-17 01:05:26.402762 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-17 01:05:26.402768 | orchestrator | Tuesday 17 March 2026 01:04:25 +0000 (0:00:02.242) 0:00:27.538 *********
2026-03-17 01:05:26.402773 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:05:26.402778 | orchestrator |
2026-03-17 01:05:26.402783 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-17 01:05:26.402788 | orchestrator | Tuesday 17 March 2026 01:04:27 +0000 (0:00:02.014) 0:00:29.552 *********
2026-03-17 01:05:26.402794 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:05:26.402799 | orchestrator |
2026-03-17 01:05:26.402805 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-17 01:05:26.402811 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:14.241) 0:00:43.794 *********
2026-03-17 01:05:26.402816 | orchestrator |
2026-03-17 01:05:26.402821 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-17 01:05:26.402826 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:00.058) 0:00:43.853 *********
2026-03-17 01:05:26.402831 | orchestrator |
2026-03-17 01:05:26.402836 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-17 01:05:26.402841 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:00.058) 0:00:43.912 *********
2026-03-17 01:05:26.402846 | orchestrator |
2026-03-17 01:05:26.402851 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-17 01:05:26.402860 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:00.061) 0:00:43.973 *********
2026-03-17 01:05:26.402865 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:05:26.402871 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:05:26.402876 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:05:26.402881 | orchestrator |
2026-03-17 01:05:26.402887 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:05:26.402893 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0
2026-03-17 01:05:26.402902 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-03-17 01:05:26.402911 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-03-17 01:05:26.402917 | orchestrator |
2026-03-17 01:05:26.402922 | orchestrator |
2026-03-17 01:05:26.402928 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:05:26.402934 | orchestrator | Tuesday 17 March 2026 01:05:24 +0000 (0:00:42.826) 0:01:26.799 *********
2026-03-17 01:05:26.402939 | orchestrator | ===============================================================================
2026-03-17 01:05:26.402944 | orchestrator | horizon : Restart horizon container ------------------------------------ 42.83s
2026-03-17 01:05:26.402950 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.24s
2026-03-17 01:05:26.402955 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.24s
2026-03-17 01:05:26.402959 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.01s
2026-03-17 01:05:26.402965 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.89s
2026-03-17 01:05:26.402970 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.83s
2026-03-17 01:05:26.402975 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.66s
2026-03-17 01:05:26.402981 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.50s
2026-03-17 01:05:26.402986 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.36s
2026-03-17 01:05:26.402996 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.31s
2026-03-17 01:05:26.403002 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.28s
2026-03-17 01:05:26.403007 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.03s
2026-03-17 01:05:26.403012 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.95s
2026-03-17 01:05:26.403018 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s
2026-03-17 01:05:26.403024 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.72s
2026-03-17 01:05:26.403029 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s
2026-03-17 01:05:26.403034 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s
2026-03-17 01:05:26.403039 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s
2026-03-17 01:05:26.403044 | orchestrator | horizon : Update policy file name --------------------------------------- 0.44s
2026-03-17 01:05:26.403049 | orchestrator | horizon : Update policy file name --------------------------------------- 0.43s
2026-03-17 01:05:26.403054 | orchestrator | 2026-03-17 01:05:26 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:26.403060 | orchestrator | 2026-03-17 01:05:26 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:29.440584 | orchestrator | 2026-03-17 01:05:29 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:29.441748 | orchestrator | 2026-03-17 01:05:29 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:29.442244 | orchestrator | 2026-03-17 01:05:29 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:32.485252 | orchestrator | 2026-03-17 01:05:32 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:32.486767 | orchestrator | 2026-03-17 01:05:32 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:32.486797 | orchestrator | 2026-03-17 01:05:32 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:35.521408 | orchestrator | 2026-03-17 01:05:35 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:35.522995 | orchestrator | 2026-03-17 01:05:35 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:35.523125 | orchestrator | 2026-03-17 01:05:35 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:38.558888 | orchestrator | 2026-03-17 01:05:38 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:38.562058 | orchestrator | 2026-03-17 01:05:38 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:38.562130 | orchestrator | 2026-03-17 01:05:38 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:41.599921 | orchestrator | 2026-03-17 01:05:41 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:41.601962 | orchestrator | 2026-03-17 01:05:41 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:41.602071 | orchestrator | 2026-03-17 01:05:41 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:44.652263 | orchestrator | 2026-03-17 01:05:44 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:44.654127 | orchestrator | 2026-03-17 01:05:44 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:44.654163 | orchestrator | 2026-03-17 01:05:44 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:47.694201 | orchestrator | 2026-03-17 01:05:47 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:47.695485 | orchestrator | 2026-03-17 01:05:47 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:47.695508 | orchestrator | 2026-03-17 01:05:47 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:50.737448 | orchestrator | 2026-03-17 01:05:50 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:50.739468 | orchestrator | 2026-03-17 01:05:50 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:50.739596 | orchestrator | 2026-03-17 01:05:50 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:53.781412 | orchestrator | 2026-03-17 01:05:53 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:53.782507 | orchestrator | 2026-03-17 01:05:53 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:53.782696 | orchestrator | 2026-03-17 01:05:53 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:56.817356 | orchestrator | 2026-03-17 01:05:56 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state STARTED
2026-03-17 01:05:56.818671 | orchestrator | 2026-03-17 01:05:56 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:56.818700 | orchestrator | 2026-03-17 01:05:56 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:05:59.861832 | orchestrator | 2026-03-17 01:05:59 | INFO  | Task f7a54483-0ca6-4307-b9ae-a6f4b8962d60 is in state STARTED
2026-03-17 01:05:59.861895 | orchestrator | 2026-03-17 01:05:59 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:05:59.865085 | orchestrator | 2026-03-17 01:05:59 | INFO  | Task c7e5e29a-a59a-4f54-a513-60bace0abad8 is in state STARTED
2026-03-17 01:05:59.866274 | orchestrator | 2026-03-17 01:05:59 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:05:59.877542 | orchestrator |
2026-03-17 01:05:59.877595 | orchestrator |
2026-03-17 01:05:59.877600 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:05:59.877604 | orchestrator |
2026-03-17 01:05:59.877607 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:05:59.877611 | orchestrator | Tuesday 17 March 2026 01:03:58 +0000 (0:00:00.277) 0:00:00.277 *********
2026-03-17 01:05:59.877614 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:05:59.877618 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:05:59.877622 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:05:59.877625 | orchestrator |
2026-03-17 01:05:59.877628 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:05:59.877631 | orchestrator | Tuesday 17 March 2026 01:03:58 +0000 (0:00:00.250) 0:00:00.528 *********
2026-03-17 01:05:59.877635 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-17 01:05:59.877638 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-17 01:05:59.877641 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-17 01:05:59.877644 | orchestrator |
2026-03-17 01:05:59.877647 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-03-17 01:05:59.877651 | orchestrator |
2026-03-17 01:05:59.877654 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-17 01:05:59.877657 | orchestrator | Tuesday 17 March 2026 01:03:58 +0000 (0:00:00.253) 0:00:00.781 *********
2026-03-17 01:05:59.877660 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:05:59.877664 | orchestrator |
2026-03-17 01:05:59.877667 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-03-17 01:05:59.877670 | orchestrator | Tuesday 17 March 2026 01:03:59
+0000 (0:00:00.544) 0:00:01.325 ********* 2026-03-17 01:05:59.877691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.877697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.877709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.877713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}}) 2026-03-17 01:05:59.877717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877733 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877740 | orchestrator | 2026-03-17 01:05:59.877743 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-17 01:05:59.877749 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:02.026) 0:00:03.352 ********* 2026-03-17 01:05:59.877752 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.877756 | orchestrator | 2026-03-17 01:05:59.877759 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-17 01:05:59.877762 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:00.134) 0:00:03.487 ********* 2026-03-17 01:05:59.877765 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.877768 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 01:05:59.877773 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.877778 | orchestrator | 2026-03-17 01:05:59.877782 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-17 01:05:59.877786 | orchestrator | Tuesday 17 March 2026 01:04:01 +0000 (0:00:00.232) 0:00:03.720 ********* 2026-03-17 01:05:59.877799 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:05:59.877805 | orchestrator | 2026-03-17 01:05:59.877810 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:05:59.877814 | orchestrator | Tuesday 17 March 2026 01:04:02 +0000 (0:00:00.847) 0:00:04.567 ********* 2026-03-17 01:05:59.877819 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:05:59.877878 | orchestrator | 2026-03-17 01:05:59.877885 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-17 01:05:59.877890 | orchestrator | Tuesday 17 March 2026 01:04:03 +0000 (0:00:00.576) 0:00:05.144 ********* 2026-03-17 01:05:59.877899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.877906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.877916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.877922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.877961 | orchestrator | 2026-03-17 01:05:59.877966 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-17 01:05:59.877971 | orchestrator | Tuesday 17 March 2026 01:04:06 +0000 (0:00:03.329) 0:00:08.473 ********* 2026-03-17 01:05:59.877980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.877990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.877996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.878188 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.878199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.878204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.878294 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.878325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.878334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}})  2026-03-17 01:05:59.878346 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.878351 | orchestrator | 2026-03-17 01:05:59.878357 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-17 01:05:59.878362 | orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.504) 0:00:08.978 ********* 2026-03-17 01:05:59.878368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.878383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.878394 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.878402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.878408 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.878425 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.878435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.878441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.878455 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.878460 | orchestrator | 2026-03-17 01:05:59.878466 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-17 01:05:59.878471 | 
orchestrator | Tuesday 17 March 2026 01:04:07 +0000 (0:00:00.781) 0:00:09.760 ********* 2026-03-17 01:05:59.878477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.878483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.878494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.878500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.878537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.878544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.878550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.878559 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.878568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.878573 | orchestrator | 2026-03-17 01:05:59.878578 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-17 01:05:59.878584 | orchestrator | Tuesday 17 March 2026 01:04:11 +0000 (0:00:03.418) 0:00:13.178 ********* 2026-03-17 01:05:59.878592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.878598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.878613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.878630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.878641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.878650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.878655 | orchestrator | 2026-03-17 01:05:59.878660 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-17 01:05:59.878666 | orchestrator | Tuesday 17 March 2026 01:04:15 +0000 (0:00:04.480) 0:00:17.658 ********* 2026-03-17 01:05:59.878671 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:59.878676 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:59.878681 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:59.878686 | orchestrator | 2026-03-17 01:05:59.878692 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-17 01:05:59.878697 | orchestrator | Tuesday 17 March 2026 01:04:16 +0000 (0:00:01.219) 0:00:18.878 ********* 2026-03-17 01:05:59.878702 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.878707 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.878712 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.878717 | orchestrator | 2026-03-17 01:05:59.878722 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-17 01:05:59.878730 | orchestrator | Tuesday 17 March 2026 01:04:17 
+0000 (0:00:00.839) 0:00:19.717 ********* 2026-03-17 01:05:59.878735 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.878740 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.878745 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.878750 | orchestrator | 2026-03-17 01:05:59.878756 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-17 01:05:59.878761 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:00.270) 0:00:19.988 ********* 2026-03-17 01:05:59.878766 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.878771 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.878776 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.878781 | orchestrator | 2026-03-17 01:05:59.878786 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-17 01:05:59.878792 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:00.244) 0:00:20.232 ********* 2026-03-17 01:05:59.878799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.878805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.878819 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.878825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.878833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.878844 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.878852 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.878861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.878866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.878872 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.878877 | orchestrator | 2026-03-17 01:05:59.878882 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:05:59.878887 | orchestrator | Tuesday 17 March 2026 01:04:18 +0000 (0:00:00.537) 0:00:20.770 ********* 2026-03-17 01:05:59.878893 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.878898 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.878903 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.878908 | orchestrator | 2026-03-17 01:05:59.878913 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-17 01:05:59.878918 | orchestrator | Tuesday 17 March 2026 01:04:19 +0000 (0:00:00.383) 0:00:21.153 ********* 2026-03-17 01:05:59.878924 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-17 01:05:59.878931 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-17 01:05:59.878937 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-17 01:05:59.878942 | orchestrator | 2026-03-17 01:05:59.878948 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-17 01:05:59.878953 | orchestrator | Tuesday 17 March 2026 01:04:20 +0000 (0:00:01.398) 0:00:22.552 ********* 2026-03-17 01:05:59.878959 | orchestrator | ok: [testbed-node-0 
-> localhost] 2026-03-17 01:05:59.878964 | orchestrator | 2026-03-17 01:05:59.878969 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-17 01:05:59.878975 | orchestrator | Tuesday 17 March 2026 01:04:21 +0000 (0:00:00.855) 0:00:23.407 ********* 2026-03-17 01:05:59.878980 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.878986 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.878991 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.878996 | orchestrator | 2026-03-17 01:05:59.879002 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-17 01:05:59.879011 | orchestrator | Tuesday 17 March 2026 01:04:22 +0000 (0:00:00.636) 0:00:24.043 ********* 2026-03-17 01:05:59.879016 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-17 01:05:59.879022 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-17 01:05:59.879027 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:05:59.879032 | orchestrator | 2026-03-17 01:05:59.879037 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-17 01:05:59.879043 | orchestrator | Tuesday 17 March 2026 01:04:23 +0000 (0:00:01.250) 0:00:25.294 ********* 2026-03-17 01:05:59.879048 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:59.879054 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:59.879059 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:59.879064 | orchestrator | 2026-03-17 01:05:59.879070 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-17 01:05:59.879075 | orchestrator | Tuesday 17 March 2026 01:04:23 +0000 (0:00:00.466) 0:00:25.760 ********* 2026-03-17 01:05:59.879080 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-17 01:05:59.879086 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-17 01:05:59.879093 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-17 01:05:59.879098 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-17 01:05:59.879104 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-17 01:05:59.879110 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-17 01:05:59.879115 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-17 01:05:59.879121 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-17 01:05:59.879126 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-17 01:05:59.879132 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-17 01:05:59.879137 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-17 01:05:59.879142 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-17 01:05:59.879148 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-17 01:05:59.879153 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-17 01:05:59.879158 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-17 01:05:59.879163 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 
'id_rsa'}) 2026-03-17 01:05:59.879168 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:05:59.879174 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-17 01:05:59.879179 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:05:59.879184 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:05:59.879190 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-17 01:05:59.879195 | orchestrator | 2026-03-17 01:05:59.879201 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-17 01:05:59.879206 | orchestrator | Tuesday 17 March 2026 01:04:31 +0000 (0:00:07.698) 0:00:33.459 ********* 2026-03-17 01:05:59.879211 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:05:59.879222 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:05:59.879227 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-17 01:05:59.879233 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:05:59.879241 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:05:59.879246 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-17 01:05:59.879252 | orchestrator | 2026-03-17 01:05:59.879257 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-03-17 01:05:59.879262 | orchestrator | Tuesday 17 March 2026 01:04:33 +0000 (0:00:02.305) 0:00:35.764 ********* 2026-03-17 01:05:59.879268 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.879277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.879283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-17 01:05:59.879296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.879302 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.879307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-17 01:05:59.879315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.879321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.879326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-17 01:05:59.879332 | orchestrator | 2026-03-17 01:05:59.879337 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] *** 2026-03-17 01:05:59.879342 | orchestrator | Tuesday 17 March 2026 01:04:35 +0000 (0:00:02.059) 0:00:37.824 ********* 2026-03-17 01:05:59.879351 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:05:59.879357 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:05:59.879363 | orchestrator | } 2026-03-17 01:05:59.879368 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:05:59.879374 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:05:59.879377 | orchestrator | } 2026-03-17 01:05:59.879380 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:05:59.879383 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:05:59.879386 | orchestrator | } 
2026-03-17 01:05:59.879390 | orchestrator | 2026-03-17 01:05:59.879393 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:05:59.879396 | orchestrator | Tuesday 17 March 2026 01:04:36 +0000 (0:00:00.369) 0:00:38.193 ********* 2026-03-17 01:05:59.879401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59 | INFO  | Task 382166e0-0bd8-42d1-b461-7cdcebc8e414 is in state SUCCESS 2026-03-17 01:05:59.879410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.879415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.879419 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.879422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  
2026-03-17 01:05:59.879428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.879434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.879438 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.879441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-17 01:05:59.879446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-17 01:05:59.879450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-17 01:05:59.879455 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.879458 | orchestrator | 2026-03-17 01:05:59.879461 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 
01:05:59.879464 | orchestrator | Tuesday 17 March 2026 01:04:36 +0000 (0:00:00.667) 0:00:38.861 ********* 2026-03-17 01:05:59.879468 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.879471 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.879474 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.879477 | orchestrator | 2026-03-17 01:05:59.879480 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-17 01:05:59.879483 | orchestrator | Tuesday 17 March 2026 01:04:37 +0000 (0:00:00.254) 0:00:39.115 ********* 2026-03-17 01:05:59.879486 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:59.879489 | orchestrator | 2026-03-17 01:05:59.879493 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-17 01:05:59.879496 | orchestrator | Tuesday 17 March 2026 01:04:39 +0000 (0:00:02.020) 0:00:41.136 ********* 2026-03-17 01:05:59.879499 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:59.879502 | orchestrator | 2026-03-17 01:05:59.879505 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-17 01:05:59.879527 | orchestrator | Tuesday 17 March 2026 01:04:41 +0000 (0:00:02.143) 0:00:43.280 ********* 2026-03-17 01:05:59.879532 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:59.879537 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:59.879542 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:59.879547 | orchestrator | 2026-03-17 01:05:59.879553 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-17 01:05:59.879558 | orchestrator | Tuesday 17 March 2026 01:04:42 +0000 (0:00:01.004) 0:00:44.285 ********* 2026-03-17 01:05:59.879563 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:59.879569 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:59.879573 | orchestrator | ok: [testbed-node-2] 
2026-03-17 01:05:59.879576 | orchestrator | 2026-03-17 01:05:59.879579 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-17 01:05:59.879583 | orchestrator | Tuesday 17 March 2026 01:04:42 +0000 (0:00:00.402) 0:00:44.687 ********* 2026-03-17 01:05:59.879586 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.879589 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.879592 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.879595 | orchestrator | 2026-03-17 01:05:59.879600 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-17 01:05:59.879604 | orchestrator | Tuesday 17 March 2026 01:04:43 +0000 (0:00:00.408) 0:00:45.096 ********* 2026-03-17 01:05:59.879607 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:59.879610 | orchestrator | 2026-03-17 01:05:59.879613 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-17 01:05:59.879616 | orchestrator | Tuesday 17 March 2026 01:04:56 +0000 (0:00:13.021) 0:00:58.117 ********* 2026-03-17 01:05:59.879619 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:59.879622 | orchestrator | 2026-03-17 01:05:59.879626 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-17 01:05:59.879629 | orchestrator | Tuesday 17 March 2026 01:05:06 +0000 (0:00:10.310) 0:01:08.428 ********* 2026-03-17 01:05:59.879632 | orchestrator | 2026-03-17 01:05:59.879635 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-17 01:05:59.879638 | orchestrator | Tuesday 17 March 2026 01:05:06 +0000 (0:00:00.069) 0:01:08.497 ********* 2026-03-17 01:05:59.879641 | orchestrator | 2026-03-17 01:05:59.879646 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-17 01:05:59.879652 | orchestrator 
| Tuesday 17 March 2026 01:05:06 +0000 (0:00:00.063) 0:01:08.561 ********* 2026-03-17 01:05:59.879657 | orchestrator | 2026-03-17 01:05:59.879662 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-17 01:05:59.879672 | orchestrator | Tuesday 17 March 2026 01:05:06 +0000 (0:00:00.164) 0:01:08.725 ********* 2026-03-17 01:05:59.879677 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:59.879683 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:59.879688 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:59.879693 | orchestrator | 2026-03-17 01:05:59.879699 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-17 01:05:59.879704 | orchestrator | Tuesday 17 March 2026 01:05:15 +0000 (0:00:08.439) 0:01:17.165 ********* 2026-03-17 01:05:59.879709 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:59.879715 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:59.879720 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:59.879726 | orchestrator | 2026-03-17 01:05:59.879731 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-17 01:05:59.879736 | orchestrator | Tuesday 17 March 2026 01:05:24 +0000 (0:00:09.368) 0:01:26.533 ********* 2026-03-17 01:05:59.879744 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:59.879749 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:05:59.879755 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:05:59.879760 | orchestrator | 2026-03-17 01:05:59.879765 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:05:59.879770 | orchestrator | Tuesday 17 March 2026 01:05:30 +0000 (0:00:05.577) 0:01:32.111 ********* 2026-03-17 01:05:59.879776 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-17 01:05:59.879781 | orchestrator | 2026-03-17 01:05:59.879786 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-17 01:05:59.879792 | orchestrator | Tuesday 17 March 2026 01:05:30 +0000 (0:00:00.651) 0:01:32.762 ********* 2026-03-17 01:05:59.879798 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:05:59.879803 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:59.879808 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:05:59.879814 | orchestrator | 2026-03-17 01:05:59.879819 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-17 01:05:59.879824 | orchestrator | Tuesday 17 March 2026 01:05:31 +0000 (0:00:00.697) 0:01:33.459 ********* 2026-03-17 01:05:59.879830 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:05:59.879835 | orchestrator | 2026-03-17 01:05:59.879840 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-17 01:05:59.879845 | orchestrator | Tuesday 17 March 2026 01:05:32 +0000 (0:00:01.406) 0:01:34.865 ********* 2026-03-17 01:05:59.879851 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-17 01:05:59.879856 | orchestrator | 2026-03-17 01:05:59.879861 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] ************* 2026-03-17 01:05:59.879866 | orchestrator | Tuesday 17 March 2026 01:05:45 +0000 (0:00:12.105) 0:01:46.971 ********* 2026-03-17 01:05:59.879872 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-17 01:05:59.879877 | orchestrator | 2026-03-17 01:05:59.879882 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************ 2026-03-17 01:05:59.879887 | orchestrator | Tuesday 17 March 2026 01:05:48 +0000 (0:00:03.377) 0:01:50.348 ********* 2026-03-17 01:05:59.879893 | orchestrator | ok: [testbed-node-0] => (item=keystone -> 
https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-17 01:05:59.879898 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-17 01:05:59.879903 | orchestrator | 2026-03-17 01:05:59.879908 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-17 01:05:59.879914 | orchestrator | Tuesday 17 March 2026 01:05:54 +0000 (0:00:05.930) 0:01:56.279 ********* 2026-03-17 01:05:59.879919 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.879924 | orchestrator | 2026-03-17 01:05:59.879931 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-17 01:05:59.879936 | orchestrator | Tuesday 17 March 2026 01:05:54 +0000 (0:00:00.104) 0:01:56.383 ********* 2026-03-17 01:05:59.879945 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.879951 | orchestrator | 2026-03-17 01:05:59.879956 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-17 01:05:59.879961 | orchestrator | Tuesday 17 March 2026 01:05:54 +0000 (0:00:00.089) 0:01:56.472 ********* 2026-03-17 01:05:59.879966 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.879972 | orchestrator | 2026-03-17 01:05:59.879978 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] *********** 2026-03-17 01:05:59.879983 | orchestrator | Tuesday 17 March 2026 01:05:54 +0000 (0:00:00.207) 0:01:56.680 ********* 2026-03-17 01:05:59.879988 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.879993 | orchestrator | 2026-03-17 01:05:59.880002 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-17 01:05:59.880008 | orchestrator | Tuesday 17 March 2026 01:05:55 +0000 (0:00:00.283) 0:01:56.964 ********* 2026-03-17 01:05:59.880013 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:05:59.880018 | 
orchestrator | 2026-03-17 01:05:59.880023 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-17 01:05:59.880028 | orchestrator | Tuesday 17 March 2026 01:05:57 +0000 (0:00:02.781) 0:01:59.745 ********* 2026-03-17 01:05:59.880033 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:05:59.880038 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:05:59.880043 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:05:59.880048 | orchestrator | 2026-03-17 01:05:59.880053 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:05:59.880059 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-03-17 01:05:59.880065 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-17 01:05:59.880070 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-17 01:05:59.880075 | orchestrator | 2026-03-17 01:05:59.880080 | orchestrator | 2026-03-17 01:05:59.880086 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:05:59.880091 | orchestrator | Tuesday 17 March 2026 01:05:58 +0000 (0:00:00.436) 0:02:00.182 ********* 2026-03-17 01:05:59.880096 | orchestrator | =============================================================================== 2026-03-17 01:05:59.880102 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.02s 2026-03-17 01:05:59.880108 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.11s 2026-03-17 01:05:59.880113 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.31s 2026-03-17 01:05:59.880118 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.37s 
2026-03-17 01:05:59.880126 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.44s 2026-03-17 01:05:59.880131 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 7.70s 2026-03-17 01:05:59.880136 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 5.93s 2026-03-17 01:05:59.880142 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.58s 2026-03-17 01:05:59.880147 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.48s 2026-03-17 01:05:59.880152 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.42s 2026-03-17 01:05:59.880158 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 3.38s 2026-03-17 01:05:59.880163 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.33s 2026-03-17 01:05:59.880168 | orchestrator | keystone : Creating default user role ----------------------------------- 2.78s 2026-03-17 01:05:59.880173 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.31s 2026-03-17 01:05:59.880182 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.14s 2026-03-17 01:05:59.880188 | orchestrator | service-check-containers : keystone | Check containers ------------------ 2.06s 2026-03-17 01:05:59.880193 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.03s 2026-03-17 01:05:59.880198 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.02s 2026-03-17 01:05:59.880203 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.41s 2026-03-17 01:05:59.880210 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.40s 2026-03-17 
01:05:59.886131 | orchestrator | 2026-03-17 01:05:59 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:05:59.886187 | orchestrator | 2026-03-17 01:05:59 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:06:02.938670 | orchestrator | 2026-03-17 01:06:02 | INFO  | Task f7a54483-0ca6-4307-b9ae-a6f4b8962d60 is in state STARTED
2026-03-17 01:06:02.938726 | orchestrator | 2026-03-17 01:06:02 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:06:02.938732 | orchestrator | 2026-03-17 01:06:02 | INFO  | Task c7e5e29a-a59a-4f54-a513-60bace0abad8 is in state STARTED
2026-03-17 01:06:02.938737 | orchestrator | 2026-03-17 01:06:02 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:06:02.938743 | orchestrator | 2026-03-17 01:06:02 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state STARTED
2026-03-17 01:06:02.938748 | orchestrator | 2026-03-17 01:06:02 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:06:09.032818 | orchestrator | 2026-03-17 01:06:09 | INFO  | Task 172a41d7-d329-4cf3-ad72-7a4e7d4dc162 is in state SUCCESS
2026-03-17 01:06:12.079916 | orchestrator | 2026-03-17 01:06:12 | INFO  | Task e9ddd0f8-0438-4aa1-9478-8f3dfe78918a is in state STARTED
2026-03-17 01:07:24.983537 | orchestrator | 2026-03-17 01:07:24 | INFO  | Task f7a54483-0ca6-4307-b9ae-a6f4b8962d60 is in state SUCCESS
2026-03-17 01:07:24.988500 | orchestrator | 2026-03-17 01:07:24 | INFO  | Task 91347334-4402-4f8c-a0e9-b81c40404a0c is in state STARTED
2026-03-17 01:07:40.214197 | orchestrator | 2026-03-17 01:07:40 | INFO  | Task 
eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:07:40.215723 | orchestrator | 2026-03-17 01:07:40 | INFO  | Task e9ddd0f8-0438-4aa1-9478-8f3dfe78918a is in state STARTED
2026-03-17 01:07:40.216740 | orchestrator | 2026-03-17 01:07:40 | INFO  | Task c7e5e29a-a59a-4f54-a513-60bace0abad8 is in state STARTED
2026-03-17 01:07:40.217410 | orchestrator | 2026-03-17 01:07:40 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:07:40.218167 | orchestrator | 2026-03-17 01:07:40 | INFO  | Task 91347334-4402-4f8c-a0e9-b81c40404a0c is in state STARTED
2026-03-17 01:07:40.218287 | orchestrator | 2026-03-17 01:07:40 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:07:43.244361 | orchestrator | 2026-03-17 01:07:43 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:07:43.244515 | orchestrator | 2026-03-17 01:07:43 | INFO  | Task e9ddd0f8-0438-4aa1-9478-8f3dfe78918a is in state SUCCESS
2026-03-17 01:07:43.245048 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-17 01:07:43.245067 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-17 01:07:43.245076 | orchestrator | Tuesday 17 March 2026 01:05:17 +0000 (0:00:00.285) 0:00:00.285 *********
2026-03-17 01:07:43.245085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-17 01:07:43.245101 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-17 01:07:43.245106 | orchestrator | Tuesday 17 March 2026 01:05:17 +0000 (0:00:00.220) 0:00:00.506 *********
2026-03-17 01:07:43.245112 | orchestrator | changed:
[testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-17 01:07:43.245117 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-17 01:07:43.245123 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-17 01:07:43.245134 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-17 01:07:43.245139 | orchestrator | Tuesday 17 March 2026 01:05:18 +0000 (0:00:01.439) 0:00:01.945 *********
2026-03-17 01:07:43.245144 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-17 01:07:43.245155 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-17 01:07:43.245160 | orchestrator | Tuesday 17 March 2026 01:05:19 +0000 (0:00:01.072) 0:00:03.018 *********
2026-03-17 01:07:43.245165 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.245177 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-17 01:07:43.245182 | orchestrator | Tuesday 17 March 2026 01:05:20 +0000 (0:00:00.800) 0:00:03.818 *********
2026-03-17 01:07:43.245196 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.245219 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-17 01:07:43.245225 | orchestrator | Tuesday 17 March 2026 01:05:21 +0000 (0:00:00.835) 0:00:04.654 *********
2026-03-17 01:07:43.245230 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-03-17 01:07:43.245236 | orchestrator | ok: [testbed-manager]
2026-03-17 01:07:43.245246 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-17 01:07:43.245264 | orchestrator | Tuesday 17 March 2026 01:05:59 +0000 (0:00:37.684) 0:00:42.338 *********
2026-03-17 01:07:43.245270 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-03-17 01:07:43.245343 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-03-17 01:07:43.245351 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-03-17 01:07:43.245356 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-03-17 01:07:43.245362 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-03-17 01:07:43.245372 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-17 01:07:43.245377 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:04.255) 0:00:46.594 *********
2026-03-17 01:07:43.245430 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-17 01:07:43.245497 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-17 01:07:43.245502 | orchestrator | Tuesday 17 March 2026 01:06:04 +0000 (0:00:00.759) 0:00:47.354 *********
2026-03-17 01:07:43.245508 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:07:43.245518 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-17 01:07:43.245524 | orchestrator | Tuesday 17 March 2026 01:06:04 +0000 (0:00:00.112) 0:00:47.466 *********
2026-03-17 01:07:43.245529 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:07:43.245540 | orchestrator | RUNNING HANDLER
[osism.services.cephclient : Restart cephclient service] *******
2026-03-17 01:07:43.245546 | orchestrator | Tuesday 17 March 2026 01:06:04 +0000 (0:00:00.297) 0:00:47.764 *********
2026-03-17 01:07:43.245551 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.245562 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-03-17 01:07:43.245567 | orchestrator | Tuesday 17 March 2026 01:06:06 +0000 (0:00:01.333) 0:00:49.098 *********
2026-03-17 01:07:43.245572 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.245583 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-03-17 01:07:43.245588 | orchestrator | Tuesday 17 March 2026 01:06:06 +0000 (0:00:00.711) 0:00:49.809 *********
2026-03-17 01:07:43.245594 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.245606 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-03-17 01:07:43.245611 | orchestrator | Tuesday 17 March 2026 01:06:07 +0000 (0:00:00.500) 0:00:50.310 *********
2026-03-17 01:07:43.245616 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-17 01:07:43.245621 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-17 01:07:43.245627 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-17 01:07:43.245633 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-17 01:07:43.245646 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:07:43.245667 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-17 01:07:43.245709 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:07:43.245719 | orchestrator | Tuesday 17 March 2026 01:06:08 +0000 (0:00:01.320) 0:00:51.631 *********
2026-03-17 01:07:43.245727 | orchestrator | ===============================================================================
2026-03-17 01:07:43.245735 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.68s
2026-03-17 01:07:43.245745 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.26s
2026-03-17 01:07:43.245754 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.44s
2026-03-17 01:07:43.245777 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.33s
2026-03-17 01:07:43.245787 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.32s
2026-03-17 01:07:43.245796 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.07s
2026-03-17 01:07:43.245808 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.84s
2026-03-17 01:07:43.245820 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.80s
2026-03-17 01:07:43.245828 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.76s
2026-03-17 01:07:43.245837 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s
2026-03-17 01:07:43.245845 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.50s
2026-03-17 01:07:43.245853 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s
2026-03-17 01:07:43.245861 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2026-03-17 01:07:43.245869 |
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2026-03-17 01:07:43.245892 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-17 01:07:43.245907 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-17 01:07:43.245914 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.144) 0:00:00.144 *********
2026-03-17 01:07:43.245921 | orchestrator | changed: [localhost]
2026-03-17 01:07:43.245936 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-17 01:07:43.245944 | orchestrator | Tuesday 17 March 2026 01:06:04 +0000 (0:00:01.011) 0:00:01.156 *********
2026-03-17 01:07:43.245951 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
2026-03-17 01:07:43.245959 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-03-17 01:07:43.245968 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2026-03-17 01:07:43.245979 | orchestrator | fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "dest": "/share/ironic/ironic/ironic-agent.initramfs", "elapsed": 10, "msg": "Request failed: ", "url": "https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/ipa-centos9-stable-2025.1.initramfs"}
2026-03-17 01:07:43.246000 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:07:43.246149 | orchestrator | localhost : ok=1  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2026-03-17 01:07:43.246186 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:07:43.246195 | orchestrator | Tuesday 17 March 2026 01:07:22 +0000 (0:01:17.971) 0:01:19.127 *********
2026-03-17 01:07:43.246205 | orchestrator | ===============================================================================
2026-03-17 01:07:43.246214 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 77.97s
2026-03-17 01:07:43.246224 | orchestrator | Ensure the destination directory exists --------------------------------- 1.01s
2026-03-17 01:07:43.246243 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2.16.14
2026-03-17 01:07:43.246273 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-03-17 01:07:43.246293 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-03-17 01:07:43.246312 | orchestrator | Tuesday 17 March 2026 01:06:12 +0000 (0:00:00.208) 0:00:00.208 *********
2026-03-17 01:07:43.246323 | orchestrator | changed: [testbed-manager]
2026-03-17
01:07:43.246332 | orchestrator |
2026-03-17 01:07:43.246341 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-03-17 01:07:43.246351 | orchestrator | Tuesday 17 March 2026 01:06:14 +0000 (0:00:01.643) 0:00:01.851 *********
2026-03-17 01:07:43.246360 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.246378 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-03-17 01:07:43.246428 | orchestrator | Tuesday 17 March 2026 01:06:15 +0000 (0:00:00.967) 0:00:02.818 *********
2026-03-17 01:07:43.246437 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.246456 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-03-17 01:07:43.246471 | orchestrator | Tuesday 17 March 2026 01:06:16 +0000 (0:00:00.910) 0:00:03.728 *********
2026-03-17 01:07:43.246481 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.246498 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-03-17 01:07:43.246519 | orchestrator | Tuesday 17 March 2026 01:06:17 +0000 (0:00:01.003) 0:00:04.732 *********
2026-03-17 01:07:43.246529 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.246547 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-17 01:07:43.246556 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:00.931) 0:00:05.663 *********
2026-03-17 01:07:43.246565 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.246583 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-17 01:07:43.246591 | orchestrator | Tuesday 17 March 2026 01:06:19 +0000 (0:00:00.911) 0:00:06.575 *********
2026-03-17 01:07:43.246600 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.246617 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-17 01:07:43.246627 | orchestrator | Tuesday 17 March 2026 01:06:20 +0000 (0:00:01.142) 0:00:07.718 *********
2026-03-17 01:07:43.246636 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.246653 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-17 01:07:43.246662 | orchestrator | Tuesday 17 March 2026 01:06:21 +0000 (0:00:01.044) 0:00:08.762 *********
2026-03-17 01:07:43.246672 | orchestrator | changed: [testbed-manager]
2026-03-17 01:07:43.246691 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-17 01:07:43.246841 | orchestrator | Tuesday 17 March 2026 01:07:16 +0000 (0:00:55.063) 0:01:03.826 *********
2026-03-17 01:07:43.246859 | orchestrator | skipping: [testbed-manager]
2026-03-17 01:07:43.246878 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-17 01:07:43.246897 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-17 01:07:43.246907 | orchestrator | Tuesday 17 March 2026 01:07:16 +0000 (0:00:00.103) 0:01:03.930 *********
2026-03-17 01:07:43.246917 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:43.246934 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-17 01:07:43.246951 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-17 01:07:43.246960 | orchestrator | Tuesday 17 March 2026 01:07:28 +0000 (0:00:11.811) 0:01:15.742 *********
2026-03-17 01:07:43.246968 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:43.246985 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-17 01:07:43.247012 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-17 01:07:43.247041 | orchestrator | Tuesday 17 March 2026 01:07:29 +0000 (0:00:01.520) 0:01:17.262 *********
2026-03-17 01:07:43.247052 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:43.247071 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:07:43.247081 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-17 01:07:43.247092 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:07:43.247102 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:07:43.247111 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:07:43.247149 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:07:43.247160 | orchestrator | Tuesday 17 March 2026 01:07:41 +0000 (0:00:11.279) 0:01:28.542 *********
2026-03-17 01:07:43.247170 | orchestrator | ===============================================================================
2026-03-17
01:07:43.247179 | orchestrator | Create admin user ------------------------------------------------------ 55.06s
2026-03-17 01:07:43.247189 | orchestrator | Restart ceph manager service ------------------------------------------- 24.61s
2026-03-17 01:07:43.247198 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.64s
2026-03-17 01:07:43.247241 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.14s
2026-03-17 01:07:43.247253 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.04s
2026-03-17 01:07:43.247262 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.00s
2026-03-17 01:07:43.247272 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.97s
2026-03-17 01:07:43.247281 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.93s
2026-03-17 01:07:43.247291 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.91s
2026-03-17 01:07:43.247300 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.91s
2026-03-17 01:07:43.247309 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.10s
2026-03-17 01:07:43.247319 | orchestrator | 2026-03-17 01:07:43 | INFO  | Task c7e5e29a-a59a-4f54-a513-60bace0abad8 is in state STARTED
2026-03-17 01:07:43.247336 | orchestrator | 2026-03-17 01:07:43 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:07:43.247353 | orchestrator | 2026-03-17 01:07:43 | INFO  | Task 91347334-4402-4f8c-a0e9-b81c40404a0c is in state STARTED
2026-03-17 01:07:43.247363 | orchestrator | 2026-03-17 01:07:43 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:07:46.282657 | orchestrator | 2026-03-17 01:07:46 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:07:46.283307 | orchestrator | 2026-03-17 01:07:46 | INFO  | Task c7e5e29a-a59a-4f54-a513-60bace0abad8 is in state STARTED
2026-03-17 01:07:46.284346 | orchestrator | 2026-03-17 01:07:46 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:07:46.285286 | orchestrator | 2026-03-17 01:07:46 | INFO  | Task 91347334-4402-4f8c-a0e9-b81c40404a0c is in state STARTED
2026-03-17 01:07:46.285351 | orchestrator | 2026-03-17 01:07:46 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:07:55.369806 | orchestrator | 2026-03-17 01:07:55 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17
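The orchestrator output around here is a plain poll loop: query each task's state, and while any is still STARTED, sleep one second and check again. A minimal sketch of that pattern, where `get_state` is a caller-supplied lookup standing in for the real OSISM task API (not shown in the log):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll task states until none is left in STARTED.
    `get_state(task_id)` is a hypothetical lookup callable;
    `interval` mirrors the "Wait 1 second(s)" delay in the log."""
    pending = list(task_ids)
    states = {}
    while pending:
        for task_id in pending:
            states[task_id] = get_state(task_id)
            log(f"Task {task_id} is in state {states[task_id]}")
        # drop tasks that reached a terminal state (e.g. SUCCESS)
        pending = [t for t in pending if states[t] == "STARTED"]
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

As each task flips to SUCCESS (as e9ddd0f8 and c7e5e29a do above), it simply disappears from the next polling cycle.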
01:07:55.370977 | orchestrator | 2026-03-17 01:07:55 | INFO  | Task c7e5e29a-a59a-4f54-a513-60bace0abad8 is in state SUCCESS
2026-03-17 01:07:55.372290 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:07:55.372303 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:07:55.372309 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.373) 0:00:00.373 *********
2026-03-17 01:07:55.372315 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:07:55.372322 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:07:55.372328 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:07:55.372339 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:07:55.372346 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.396) 0:00:00.769 *********
2026-03-17 01:07:55.372353 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-17 01:07:55.372425 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-17 01:07:55.372434 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-17 01:07:55.372446 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-17 01:07:55.372507 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-17 01:07:55.372514 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.403) 0:00:01.173 *********
2026-03-17 01:07:55.372519 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:07:55.372528 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-03-17 01:07:55.372532 | orchestrator | Tuesday 17 March 2026 01:06:04 +0000 (0:00:00.613) 0:00:01.787 *********
2026-03-17 01:07:55.372558 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-03-17 01:07:55.372567 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************
2026-03-17 01:07:55.372573 | orchestrator | Tuesday 17 March 2026 01:06:08 +0000 (0:00:03.966) 0:00:05.753 *********
2026-03-17 01:07:55.372579 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-03-17 01:07:55.372599 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-03-17 01:07:55.372612 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-03-17 01:07:55.372625 | orchestrator | Tuesday 17 March 2026 01:06:14 +0000 (0:00:06.206) 0:00:11.959 *********
2026-03-17 01:07:55.372632 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-17 01:07:55.372810 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-03-17 01:07:55.372819 | orchestrator | Tuesday 17 March 2026 01:06:17 +0000 (0:00:03.180) 0:00:15.140 *********
2026-03-17 01:07:55.372823 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-03-17 01:07:55.372827 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:07:55.372834 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-03-17
01:07:55.372838 | orchestrator | Tuesday 17 March 2026 01:06:21 +0000 (0:00:03.648) 0:00:18.788 ********* 2026-03-17 01:07:55.372842 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:07:55.372846 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-17 01:07:55.372850 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-17 01:07:55.372854 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-17 01:07:55.372858 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-17 01:07:55.372862 | orchestrator | 2026-03-17 01:07:55.372866 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] *********** 2026-03-17 01:07:55.372869 | orchestrator | Tuesday 17 March 2026 01:06:36 +0000 (0:00:15.050) 0:00:33.839 ********* 2026-03-17 01:07:55.372873 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-17 01:07:55.372877 | orchestrator | 2026-03-17 01:07:55.372881 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-17 01:07:55.372885 | orchestrator | Tuesday 17 March 2026 01:06:40 +0000 (0:00:03.835) 0:00:37.674 ********* 2026-03-17 01:07:55.372891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.372919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.372927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.372946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.372958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.372965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.372993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:07:55.373019 | orchestrator |
2026-03-17 01:07:55.373028 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-17 01:07:55.373033 | orchestrator | Tuesday 17 March 2026 01:06:43 +0000 (0:00:03.274) 0:00:40.948 *********
2026-03-17 01:07:55.373037 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-17 01:07:55.373040 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-17 01:07:55.373044 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-17 01:07:55.373048 | orchestrator |
2026-03-17 01:07:55.373052 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-17 01:07:55.373056 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:01.487) 0:00:42.435 *********
2026-03-17 01:07:55.373060 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:55.373063 | orchestrator |
2026-03-17 01:07:55.373067 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-17 01:07:55.373071 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:00.140) 0:00:42.576 *********
2026-03-17 01:07:55.373075 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:07:55.373079 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:55.373083 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:55.373087 | orchestrator |
2026-03-17 01:07:55.373090 | orchestrator | TASK [barbican :
include_tasks] ************************************************ 2026-03-17 01:07:55.373094 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:00.226) 0:00:42.803 ********* 2026-03-17 01:07:55.373098 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:07:55.373102 | orchestrator | 2026-03-17 01:07:55.373106 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-17 01:07:55.373110 | orchestrator | Tuesday 17 March 2026 01:06:46 +0000 (0:00:00.555) 0:00:43.358 ********* 2026-03-17 01:07:55.373114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373175 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373188 | orchestrator | 2026-03-17 01:07:55.373194 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-17 
01:07:55.373198 | orchestrator | Tuesday 17 March 2026 01:06:49 +0000 (0:00:02.987) 0:00:46.346 ********* 2026-03-17 01:07:55.373204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.373210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373236 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:55.373243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.373253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373324 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:55.373329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.373344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373356 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:55.373372 | orchestrator | 2026-03-17 01:07:55.373380 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-17 01:07:55.373387 | orchestrator | Tuesday 17 March 2026 01:06:49 +0000 (0:00:00.568) 0:00:46.915 ********* 2026-03-17 01:07:55.373402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.373409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373426 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:55.373436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.373443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373455 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:55.373465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.373472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373490 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:55.373496 | orchestrator | 2026-03-17 01:07:55.373502 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-17 01:07:55.373508 | orchestrator | Tuesday 17 March 2026 01:06:50 +0000 (0:00:01.313) 0:00:48.229 ********* 2026-03-17 01:07:55.373519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373575 | orchestrator | 2026-03-17 01:07:55.373579 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-17 01:07:55.373583 | orchestrator | Tuesday 17 March 2026 01:06:55 +0000 (0:00:04.458) 0:00:52.687 ********* 2026-03-17 01:07:55.373587 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:07:55.373591 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:07:55.373595 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:07:55.373600 | orchestrator | 2026-03-17 01:07:55.373606 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-17 01:07:55.373612 | orchestrator | Tuesday 17 March 2026 01:06:57 +0000 (0:00:01.701) 0:00:54.389 ********* 2026-03-17 01:07:55.373619 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:07:55.373626 | orchestrator | 2026-03-17 01:07:55.373632 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-17 01:07:55.373637 | orchestrator | Tuesday 17 March 2026 01:06:58 +0000 (0:00:01.552) 0:00:55.941 ********* 2026-03-17 01:07:55.373641 | orchestrator 
| skipping: [testbed-node-0] 2026-03-17 01:07:55.373646 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:55.373653 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:55.373659 | orchestrator | 2026-03-17 01:07:55.373665 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-17 01:07:55.373671 | orchestrator | Tuesday 17 March 2026 01:06:59 +0000 (0:00:00.803) 0:00:56.745 ********* 2026-03-17 01:07:55.373681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373763 | orchestrator | 2026-03-17 01:07:55.373770 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-17 01:07:55.373776 | orchestrator | Tuesday 17 March 2026 01:07:07 +0000 (0:00:08.476) 0:01:05.222 ********* 2026-03-17 
01:07:55.373784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.373795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373807 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:55.373812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.373824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373838 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:55.373845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.373856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.373869 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:55.373880 | orchestrator | 2026-03-17 01:07:55.373886 | orchestrator | TASK [service-check-containers : barbican | Check containers] ****************** 2026-03-17 01:07:55.373891 | orchestrator | Tuesday 17 March 2026 01:07:09 +0000 (0:00:01.172) 0:01:06.394 ********* 2026-03-17 01:07:55.373899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:07:55.373916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.373933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.374004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.374068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:07:55.374076 | orchestrator | 2026-03-17 01:07:55.374080 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-03-17 01:07:55.374084 | orchestrator | Tuesday 17 March 2026 01:07:12 +0000 (0:00:03.314) 0:01:09.709 ********* 2026-03-17 01:07:55.374088 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:07:55.374092 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:07:55.374096 | orchestrator | } 2026-03-17 01:07:55.374100 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:07:55.374104 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:07:55.374108 | orchestrator | } 2026-03-17 01:07:55.374112 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:07:55.374116 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:07:55.374123 | orchestrator | } 2026-03-17 01:07:55.374127 | orchestrator | 2026-03-17 01:07:55.374131 | orchestrator | TASK [service-check-containers : Include 
tasks] ******************************** 2026-03-17 01:07:55.374135 | orchestrator | Tuesday 17 March 2026 01:07:12 +0000 (0:00:00.350) 0:01:10.060 ********* 2026-03-17 01:07:55.374140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.374147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.374152 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.374156 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:07:55.374164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.374169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.374176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.374180 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:07:55.374186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:07:55.374190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.374194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:07:55.374198 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:07:55.374203 | orchestrator | 2026-03-17 01:07:55.374206 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-17 01:07:55.374210 | orchestrator | Tuesday 17 March 2026 01:07:14 +0000 (0:00:01.706) 0:01:11.766 ********* 2026-03-17 01:07:55.374214 | orchestrator | skipping: 
[testbed-node-0]
2026-03-17 01:07:55.374218 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:07:55.374222 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:07:55.374229 | orchestrator |
2026-03-17 01:07:55.374235 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-17 01:07:55.374248 | orchestrator | Tuesday 17 March 2026 01:07:14 +0000 (0:00:00.473) 0:01:12.240 *********
2026-03-17 01:07:55.374255 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:55.374261 | orchestrator |
2026-03-17 01:07:55.374267 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-17 01:07:55.374273 | orchestrator | Tuesday 17 March 2026 01:07:17 +0000 (0:00:02.191) 0:01:14.432 *********
2026-03-17 01:07:55.374280 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:55.374286 | orchestrator |
2026-03-17 01:07:55.374292 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-17 01:07:55.374299 | orchestrator | Tuesday 17 March 2026 01:07:19 +0000 (0:00:02.502) 0:01:16.934 *********
2026-03-17 01:07:55.374305 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:55.374311 | orchestrator |
2026-03-17 01:07:55.374317 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-17 01:07:55.374323 | orchestrator | Tuesday 17 March 2026 01:07:31 +0000 (0:00:11.411) 0:01:28.346 *********
2026-03-17 01:07:55.374328 | orchestrator |
2026-03-17 01:07:55.374334 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-17 01:07:55.374340 | orchestrator | Tuesday 17 March 2026 01:07:31 +0000 (0:00:00.117) 0:01:28.464 *********
2026-03-17 01:07:55.374346 | orchestrator |
2026-03-17 01:07:55.374352 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-17 01:07:55.374358 | orchestrator | Tuesday 17 March 2026 01:07:31 +0000 (0:00:00.209) 0:01:28.674 *********
2026-03-17 01:07:55.374456 | orchestrator |
2026-03-17 01:07:55.374463 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-17 01:07:55.374467 | orchestrator | Tuesday 17 March 2026 01:07:31 +0000 (0:00:00.174) 0:01:28.848 *********
2026-03-17 01:07:55.374471 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:55.374475 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:55.374479 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:55.374482 | orchestrator |
2026-03-17 01:07:55.374486 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-17 01:07:55.374490 | orchestrator | Tuesday 17 March 2026 01:07:42 +0000 (0:00:10.902) 0:01:39.750 *********
2026-03-17 01:07:55.374494 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:55.374498 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:55.374502 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:55.374505 | orchestrator |
2026-03-17 01:07:55.374509 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-17 01:07:55.374513 | orchestrator | Tuesday 17 March 2026 01:07:46 +0000 (0:00:04.290) 0:01:44.041 *********
2026-03-17 01:07:55.374517 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:07:55.374521 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:07:55.374524 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:07:55.374529 | orchestrator |
2026-03-17 01:07:55.374539 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:07:55.374551 | orchestrator | testbed-node-0 : ok=25  changed=20  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-17 01:07:55.374558 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7
rescued=0 ignored=0
2026-03-17 01:07:55.374565 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-17 01:07:55.374571 | orchestrator |
2026-03-17 01:07:55.374577 | orchestrator |
2026-03-17 01:07:55.374583 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:07:55.374589 | orchestrator | Tuesday 17 March 2026 01:07:52 +0000 (0:00:05.352) 0:01:49.394 *********
2026-03-17 01:07:55.374595 | orchestrator | ===============================================================================
2026-03-17 01:07:55.374607 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.05s
2026-03-17 01:07:55.374613 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.41s
2026-03-17 01:07:55.374619 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.90s
2026-03-17 01:07:55.374625 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.48s
2026-03-17 01:07:55.374631 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 6.21s
2026-03-17 01:07:55.374637 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.35s
2026-03-17 01:07:55.374643 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.46s
2026-03-17 01:07:55.374650 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.29s
2026-03-17 01:07:55.374656 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 3.97s
2026-03-17 01:07:55.374662 | orchestrator | service-ks-register : barbican | Granting/revoking user roles ----------- 3.84s
2026-03-17 01:07:55.374669 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.65s
2026-03-17 01:07:55.374675 | orchestrator | service-check-containers : barbican | Check containers ------------------ 3.31s
2026-03-17 01:07:55.374682 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.28s
2026-03-17 01:07:55.374688 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.18s
2026-03-17 01:07:55.374695 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 2.99s
2026-03-17 01:07:55.374702 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.50s
2026-03-17 01:07:55.374708 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.19s
2026-03-17 01:07:55.374714 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.71s
2026-03-17 01:07:55.374721 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.70s
2026-03-17 01:07:55.374735 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.55s
2026-03-17 01:07:55.374741 | orchestrator | 2026-03-17 01:07:55 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED
2026-03-17 01:07:55.374748 | orchestrator | 2026-03-17 01:07:55 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:07:55.374755 | orchestrator | 2026-03-17 01:07:55 | INFO  | Task 91347334-4402-4f8c-a0e9-b81c40404a0c is in state STARTED
2026-03-17 01:07:55.374764 | orchestrator | 2026-03-17 01:07:55 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:07:58.395919 | orchestrator | 2026-03-17 01:07:58 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:07:58.396781 | orchestrator | 2026-03-17 01:07:58 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED
2026-03-17 01:07:58.397431 | orchestrator | 2026-03-17 01:07:58 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is
in state STARTED
2026-03-17 01:07:58.399695 | orchestrator | 2026-03-17 01:07:58 | INFO  | Task 91347334-4402-4f8c-a0e9-b81c40404a0c is in state STARTED
2026-03-17 01:07:58.399724 | orchestrator | 2026-03-17 01:07:58 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeats every ~3 s from 01:08:01 through 01:08:31; all four tasks remain in state STARTED ...]
2026-03-17 01:08:34.899006 | orchestrator | 2026-03-17 01:08:34 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:08:34.899518 | orchestrator | 2026-03-17 01:08:34 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED
2026-03-17 01:08:34.900838 | orchestrator | 2026-03-17 01:08:34 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:08:34.901549 | orchestrator | 2026-03-17 01:08:34 | INFO  | Task
91347334-4402-4f8c-a0e9-b81c40404a0c is in state STARTED
2026-03-17 01:08:34.901698 | orchestrator | 2026-03-17 01:08:34 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:08:37.956964 | orchestrator | 2026-03-17 01:08:37 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:08:37.958636 | orchestrator | 2026-03-17 01:08:37 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED
2026-03-17 01:08:37.961068 | orchestrator | 2026-03-17 01:08:37 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:08:37.963443 | orchestrator | 2026-03-17 01:08:37 | INFO  | Task 91347334-4402-4f8c-a0e9-b81c40404a0c is in state STARTED
2026-03-17 01:08:37.963500 | orchestrator | 2026-03-17 01:08:37 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:08:41.012807 | orchestrator | 2026-03-17 01:08:41 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:08:41.012864 | orchestrator | 2026-03-17 01:08:41 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED
2026-03-17 01:08:41.012873 | orchestrator | 2026-03-17 01:08:41 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:08:41.013450 | orchestrator | 2026-03-17 01:08:41 | INFO  | Task 91347334-4402-4f8c-a0e9-b81c40404a0c is in state SUCCESS
2026-03-17 01:08:41.018663 | orchestrator |
2026-03-17 01:08:41.018725 | orchestrator |
2026-03-17 01:08:41.018732 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:08:41.018736 | orchestrator |
2026-03-17 01:08:41.018740 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:08:41.018743 | orchestrator | Tuesday 17 March 2026 01:07:27 +0000 (0:00:00.593) 0:00:00.593 *********
2026-03-17 01:08:41.018747 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:08:41.018751 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:08:41.018754 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:08:41.018757 | orchestrator |
2026-03-17 01:08:41.018761 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:08:41.018764 | orchestrator | Tuesday 17 March 2026 01:07:27 +0000 (0:00:00.305) 0:00:00.898 *********
2026-03-17 01:08:41.018768 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-17 01:08:41.018771 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-17 01:08:41.018775 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-17 01:08:41.018778 | orchestrator |
2026-03-17 01:08:41.018781 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-17 01:08:41.018784 | orchestrator |
2026-03-17 01:08:41.018788 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-17 01:08:41.018799 | orchestrator | Tuesday 17 March 2026 01:07:28 +0000 (0:00:00.561) 0:00:01.275 *********
2026-03-17 01:08:41.018802 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:08:41.018806 | orchestrator |
2026-03-17 01:08:41.018809 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-03-17 01:08:41.018812 | orchestrator | Tuesday 17 March 2026 01:07:28 +0000 (0:00:00.561) 0:00:01.837 *********
2026-03-17 01:08:41.018816 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-17 01:08:41.018819 | orchestrator |
2026-03-17 01:08:41.018822 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] ***********
2026-03-17 01:08:41.018825 | orchestrator | Tuesday 17 March 2026 01:07:32 +0000 (0:00:03.890) 0:00:05.727 *********
2026-03-17 01:08:41.018828 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-17 01:08:41.018832 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-17 01:08:41.018835 | orchestrator |
2026-03-17 01:08:41.018838 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-17 01:08:41.018842 | orchestrator | Tuesday 17 March 2026 01:07:39 +0000 (0:00:06.261) 0:00:11.989 *********
2026-03-17 01:08:41.018845 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:08:41.018848 | orchestrator |
2026-03-17 01:08:41.018852 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-17 01:08:41.018864 | orchestrator | Tuesday 17 March 2026 01:07:41 +0000 (0:00:02.874) 0:00:14.863 *********
2026-03-17 01:08:41.018868 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-17 01:08:41.018871 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:08:41.018874 | orchestrator |
2026-03-17 01:08:41.018877 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-17 01:08:41.018881 | orchestrator | Tuesday 17 March 2026 01:07:45 +0000 (0:00:03.606) 0:00:18.470 *********
2026-03-17 01:08:41.018884 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:08:41.018888 | orchestrator |
2026-03-17 01:08:41.018891 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] **********
2026-03-17 01:08:41.018894 | orchestrator | Tuesday 17 March 2026 01:07:48 +0000 (0:00:03.031) 0:00:21.501 *********
2026-03-17 01:08:41.018897 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-17 01:08:41.018900 | orchestrator |
2026-03-17 01:08:41.018904 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-17 01:08:41.018907 | orchestrator | Tuesday 17 March 2026 01:07:52 +0000 (0:00:03.975) 0:00:25.477 ********* 2026-03-17 01:08:41.018910 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:41.018913 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:41.018916 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:41.018920 | orchestrator | 2026-03-17 01:08:41.018923 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-17 01:08:41.018926 | orchestrator | Tuesday 17 March 2026 01:07:52 +0000 (0:00:00.278) 0:00:25.755 ********* 2026-03-17 01:08:41.018940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.018947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.018952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.018959 | orchestrator | 2026-03-17 01:08:41.018963 | orchestrator | TASK [placement : Check if 
policies shall be overwritten] ********************** 2026-03-17 01:08:41.018966 | orchestrator | Tuesday 17 March 2026 01:07:54 +0000 (0:00:01.980) 0:00:27.736 ********* 2026-03-17 01:08:41.018969 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:41.018973 | orchestrator | 2026-03-17 01:08:41.018982 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-17 01:08:41.018987 | orchestrator | Tuesday 17 March 2026 01:07:55 +0000 (0:00:00.260) 0:00:27.997 ********* 2026-03-17 01:08:41.018992 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:41.018997 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:41.019003 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:41.019009 | orchestrator | 2026-03-17 01:08:41.019012 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-17 01:08:41.019015 | orchestrator | Tuesday 17 March 2026 01:07:55 +0000 (0:00:00.384) 0:00:28.381 ********* 2026-03-17 01:08:41.019019 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:08:41.019022 | orchestrator | 2026-03-17 01:08:41.019025 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-17 01:08:41.019029 | orchestrator | Tuesday 17 March 2026 01:07:56 +0000 (0:00:00.750) 0:00:29.131 ********* 2026-03-17 01:08:41.019032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019053 | orchestrator | 2026-03-17 01:08:41.019057 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-17 01:08:41.019060 | orchestrator | Tuesday 17 March 2026 01:07:58 +0000 (0:00:01.937) 0:00:31.069 ********* 2026-03-17 01:08:41.019063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019067 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:41.019074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019077 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:41.019083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019089 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:41.019092 | orchestrator | 2026-03-17 01:08:41.019095 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-17 01:08:41.019099 | orchestrator | Tuesday 17 March 2026 01:07:58 +0000 (0:00:00.506) 0:00:31.576 ********* 2026-03-17 01:08:41.019102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019106 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:41.019109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019113 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:41.019119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019125 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:41.019128 | orchestrator | 2026-03-17 01:08:41.019132 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-17 01:08:41.019135 | orchestrator | Tuesday 17 March 2026 01:07:59 +0000 (0:00:00.839) 0:00:32.415 ********* 2026-03-17 01:08:41.019140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019151 | orchestrator | 2026-03-17 01:08:41.019155 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-17 01:08:41.019158 | orchestrator | Tuesday 17 March 2026 01:08:01 +0000 (0:00:01.906) 0:00:34.321 ********* 2026-03-17 01:08:41.019167 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019179 | orchestrator | 2026-03-17 01:08:41.019182 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-17 01:08:41.019185 | orchestrator | Tuesday 17 March 2026 01:08:05 +0000 (0:00:03.716) 0:00:38.038 ********* 2026-03-17 01:08:41.019189 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-03-17 01:08:41.019192 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:41.019196 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-03-17 01:08:41.019201 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:41.019206 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-03-17 01:08:41.019214 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:41.019218 | orchestrator | 2026-03-17 01:08:41.019223 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-03-17 01:08:41.019227 | orchestrator | Tuesday 17 March 2026 01:08:05 +0000 (0:00:00.763) 0:00:38.801 ********* 2026-03-17 01:08:41.019233 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:08:41.019238 | orchestrator | 2026-03-17 01:08:41.019242 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] ********** 2026-03-17 01:08:41.019250 | orchestrator | Tuesday 17 March 2026 01:08:06 +0000 (0:00:00.981) 0:00:39.782 ********* 2026-03-17 01:08:41.019256 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:41.019261 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:41.019266 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:41.019273 | orchestrator | 2026-03-17 01:08:41.019282 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-17 01:08:41.019288 | orchestrator | Tuesday 17 March 2026 01:08:08 +0000 (0:00:01.964) 0:00:41.747 ********* 2026-03-17 01:08:41.019293 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:41.019313 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:41.019318 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:41.019323 | orchestrator | 2026-03-17 01:08:41.019328 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-17 01:08:41.019334 | orchestrator | Tuesday 17 March 2026 01:08:10 +0000 (0:00:01.443) 0:00:43.191 ********* 2026-03-17 01:08:41.019342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019348 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:41.019354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019359 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:41.019365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019379 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:41.019384 | orchestrator | 2026-03-17 01:08:41.019391 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-03-17 01:08:41.019396 | orchestrator | Tuesday 17 March 2026 01:08:11 +0000 (0:00:01.651) 0:00:44.842 ********* 2026-03-17 01:08:41.019407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-17 01:08:41.019432 | orchestrator | 2026-03-17 01:08:41.019437 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] *** 2026-03-17 01:08:41.019442 | orchestrator | Tuesday 17 March 2026 01:08:13 +0000 (0:00:01.213) 0:00:46.055 ********* 2026-03-17 01:08:41.019448 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:08:41.019453 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:08:41.019459 | orchestrator | } 2026-03-17 01:08:41.019464 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:08:41.019470 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:08:41.019476 | orchestrator | } 2026-03-17 01:08:41.019481 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:08:41.019487 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:08:41.019493 | orchestrator | } 2026-03-17 01:08:41.019498 | orchestrator | 2026-03-17 01:08:41.019503 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:08:41.019508 | orchestrator | Tuesday 17 March 2026 01:08:13 +0000 (0:00:00.279) 0:00:46.335 ********* 2026-03-17 01:08:41.019518 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019524 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:41.019532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019538 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:41.019544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-17 01:08:41.019553 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:41.019558 | orchestrator | 2026-03-17 01:08:41.019564 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-17 01:08:41.019569 | orchestrator | Tuesday 17 March 2026 01:08:14 +0000 (0:00:00.650) 0:00:46.985 ********* 2026-03-17 01:08:41.019574 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:41.019580 | orchestrator | 2026-03-17 01:08:41.019584 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-17 01:08:41.019590 | orchestrator | Tuesday 17 March 2026 
01:08:16 +0000 (0:00:01.996) 0:00:48.982 ********* 2026-03-17 01:08:41.019596 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:41.019601 | orchestrator | 2026-03-17 01:08:41.019606 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-17 01:08:41.019612 | orchestrator | Tuesday 17 March 2026 01:08:17 +0000 (0:00:01.905) 0:00:50.887 ********* 2026-03-17 01:08:41.019621 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:41.019627 | orchestrator | 2026-03-17 01:08:41.019631 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-17 01:08:41.019636 | orchestrator | Tuesday 17 March 2026 01:08:30 +0000 (0:00:12.038) 0:01:02.925 ********* 2026-03-17 01:08:41.019641 | orchestrator | 2026-03-17 01:08:41.019646 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-17 01:08:41.019651 | orchestrator | Tuesday 17 March 2026 01:08:30 +0000 (0:00:00.072) 0:01:02.997 ********* 2026-03-17 01:08:41.019656 | orchestrator | 2026-03-17 01:08:41.019661 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-17 01:08:41.019667 | orchestrator | Tuesday 17 March 2026 01:08:30 +0000 (0:00:00.066) 0:01:03.064 ********* 2026-03-17 01:08:41.019672 | orchestrator | 2026-03-17 01:08:41.019678 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-17 01:08:41.019683 | orchestrator | Tuesday 17 March 2026 01:08:30 +0000 (0:00:00.067) 0:01:03.131 ********* 2026-03-17 01:08:41.019689 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:41.019695 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:41.019703 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:41.019710 | orchestrator | 2026-03-17 01:08:41.019719 | orchestrator | PLAY RECAP ********************************************************************* 
2026-03-17 01:08:41.019725 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-17 01:08:41.019731 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:08:41.019736 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:08:41.019741 | orchestrator | 2026-03-17 01:08:41.019745 | orchestrator | 2026-03-17 01:08:41.019750 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:08:41.019755 | orchestrator | Tuesday 17 March 2026 01:08:40 +0000 (0:00:10.344) 0:01:13.476 ********* 2026-03-17 01:08:41.019759 | orchestrator | =============================================================================== 2026-03-17 01:08:41.019764 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.04s 2026-03-17 01:08:41.019772 | orchestrator | placement : Restart placement-api container ---------------------------- 10.34s 2026-03-17 01:08:41.019781 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 6.26s 2026-03-17 01:08:41.019787 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 3.98s 2026-03-17 01:08:41.019792 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 3.89s 2026-03-17 01:08:41.019797 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.72s 2026-03-17 01:08:41.019802 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.61s 2026-03-17 01:08:41.019807 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.03s 2026-03-17 01:08:41.019812 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.87s 2026-03-17 01:08:41.019817 | 
orchestrator | placement : Creating placement databases -------------------------------- 2.00s 2026-03-17 01:08:41.019821 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.98s 2026-03-17 01:08:41.019827 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 1.97s 2026-03-17 01:08:41.019832 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.94s 2026-03-17 01:08:41.019837 | orchestrator | placement : Copying over config.json files for services ----------------- 1.91s 2026-03-17 01:08:41.019841 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.91s 2026-03-17 01:08:41.019846 | orchestrator | placement : Copying over existing policy file --------------------------- 1.65s 2026-03-17 01:08:41.019851 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.44s 2026-03-17 01:08:41.019855 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.21s 2026-03-17 01:08:41.019860 | orchestrator | Configure uWSGI for Placement ------------------------------------------- 0.98s 2026-03-17 01:08:41.019865 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.84s 2026-03-17 01:08:41.019870 | orchestrator | 2026-03-17 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:44.061411 | orchestrator | 2026-03-17 01:08:44 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED 2026-03-17 01:08:44.062904 | orchestrator | 2026-03-17 01:08:44 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:08:44.065661 | orchestrator | 2026-03-17 01:08:44 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:08:44.065947 | orchestrator | 2026-03-17 01:08:44 | INFO  | Task 4f75f0b9-36bb-4828-b8f5-ebea0ccdc1bf is in state STARTED 
2026-03-17 01:08:44.066057 | orchestrator | 2026-03-17 01:08:44 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:08:47.098475 | orchestrator | 2026-03-17 01:08:47 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state STARTED
2026-03-17 01:08:47.099036 | orchestrator | 2026-03-17 01:08:47 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED
2026-03-17 01:08:47.100774 | orchestrator | 2026-03-17 01:08:47 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED
2026-03-17 01:08:47.101909 | orchestrator | 2026-03-17 01:08:47 | INFO  | Task 4f75f0b9-36bb-4828-b8f5-ebea0ccdc1bf is in state SUCCESS
2026-03-17 01:08:47.102210 | orchestrator | 2026-03-17 01:08:47 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:08:50.137029 | orchestrator | 2026-03-17 01:08:50 | INFO  | Task eee343bc-7dd5-4f85-8988-af966f52ffa3 is in state SUCCESS
2026-03-17 01:08:50.139015 | orchestrator |
2026-03-17 01:08:50.139074 | orchestrator |
2026-03-17 01:08:50.139080 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:08:50.139084 | orchestrator |
2026-03-17 01:08:50.139088 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:08:50.139092 | orchestrator | Tuesday 17 March 2026 01:08:44 +0000 (0:00:00.184) 0:00:00.184 *********
2026-03-17 01:08:50.139106 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:08:50.139111 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:08:50.139114 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:08:50.139117 | orchestrator |
2026-03-17 01:08:50.139120 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:08:50.139123 | orchestrator | Tuesday 17 March 2026 01:08:44 +0000 (0:00:00.338) 0:00:00.523 *********
2026-03-17 01:08:50.139127 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-17 01:08:50.139130 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-17 01:08:50.139134 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-17 01:08:50.139137 | orchestrator |
2026-03-17 01:08:50.139140 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-17 01:08:50.139143 | orchestrator |
2026-03-17 01:08:50.139146 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-17 01:08:50.139150 | orchestrator | Tuesday 17 March 2026 01:08:44 +0000 (0:00:00.470) 0:00:00.993 *********
2026-03-17 01:08:50.139153 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:08:50.139156 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:08:50.139159 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:08:50.139162 | orchestrator |
2026-03-17 01:08:50.139165 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:08:50.139175 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:08:50.139180 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:08:50.139183 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:08:50.139186 | orchestrator |
2026-03-17 01:08:50.139189 | orchestrator |
2026-03-17 01:08:50.139192 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:08:50.139196 | orchestrator | Tuesday 17 March 2026 01:08:45 +0000 (0:00:00.952) 0:00:01.946 *********
2026-03-17 01:08:50.139199 | orchestrator | ===============================================================================
2026-03-17 01:08:50.139202 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.95s
2026-03-17 01:08:50.139205 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2026-03-17 01:08:50.139208 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-03-17 01:08:50.139211 | orchestrator |
2026-03-17 01:08:50.139214 | orchestrator |
2026-03-17 01:08:50.139218 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:08:50.139221 | orchestrator |
2026-03-17 01:08:50.139260 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:08:50.139269 | orchestrator | Tuesday 17 March 2026 01:06:02 +0000 (0:00:00.391) 0:00:00.391 *********
2026-03-17 01:08:50.139274 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:08:50.139279 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:08:50.139440 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:08:50.139450 | orchestrator |
2026-03-17 01:08:50.139453 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:08:50.139456 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.483) 0:00:00.875 *********
2026-03-17 01:08:50.139460 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-17 01:08:50.139464 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-17 01:08:50.139467 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-17 01:08:50.139470 | orchestrator |
2026-03-17 01:08:50.139473 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-17 01:08:50.139476 | orchestrator |
2026-03-17 01:08:50.139479 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-17 01:08:50.139483 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.338) 0:00:01.214 *********
2026-03-17 01:08:50.139492 | orchestrator | included:
/ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:08:50.139496 | orchestrator |
2026-03-17 01:08:50.139499 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-03-17 01:08:50.139502 | orchestrator | Tuesday 17 March 2026 01:06:04 +0000 (0:00:00.519) 0:00:01.733 *********
2026-03-17 01:08:50.139505 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-17 01:08:50.139508 | orchestrator |
2026-03-17 01:08:50.139511 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] ***********
2026-03-17 01:08:50.139657 | orchestrator | Tuesday 17 March 2026 01:06:07 +0000 (0:00:03.764) 0:00:05.497 *********
2026-03-17 01:08:50.139663 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-17 01:08:50.139667 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-17 01:08:50.139670 | orchestrator |
2026-03-17 01:08:50.139673 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-17 01:08:50.139677 | orchestrator | Tuesday 17 March 2026 01:06:15 +0000 (0:00:07.230) 0:00:12.727 *********
2026-03-17 01:08:50.139680 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:08:50.139683 | orchestrator |
2026-03-17 01:08:50.139687 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-17 01:08:50.139690 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:03.125) 0:00:15.853 *********
2026-03-17 01:08:50.139701 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-17 01:08:50.139705 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:08:50.139708 | orchestrator |
2026-03-17 01:08:50.139711 | orchestrator | TASK
[service-ks-register : designate | Creating roles] ************************ 2026-03-17 01:08:50.139714 | orchestrator | Tuesday 17 March 2026 01:06:22 +0000 (0:00:04.133) 0:00:19.986 ********* 2026-03-17 01:08:50.139717 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:08:50.139721 | orchestrator | 2026-03-17 01:08:50.139724 | orchestrator | TASK [service-ks-register : designate | Granting/revoking user roles] ********** 2026-03-17 01:08:50.139727 | orchestrator | Tuesday 17 March 2026 01:06:25 +0000 (0:00:03.114) 0:00:23.101 ********* 2026-03-17 01:08:50.139730 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-17 01:08:50.139733 | orchestrator | 2026-03-17 01:08:50.139736 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-17 01:08:50.139739 | orchestrator | Tuesday 17 March 2026 01:06:29 +0000 (0:00:03.509) 0:00:26.610 ********* 2026-03-17 01:08:50.139749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.139755 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.139772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.139776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-17 01:08:50.139783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2026-03-17 01:08:50.139837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139858 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.139884 | orchestrator | 2026-03-17 01:08:50.139936 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-17 01:08:50.139988 | orchestrator | 
Tuesday 17 March 2026 01:06:32 +0000 (0:00:03.102) 0:00:29.712 ********* 2026-03-17 01:08:50.140185 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:50.140194 | orchestrator | 2026-03-17 01:08:50.140200 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-17 01:08:50.140205 | orchestrator | Tuesday 17 March 2026 01:06:32 +0000 (0:00:00.120) 0:00:29.833 ********* 2026-03-17 01:08:50.140210 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:50.140215 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:50.140267 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:50.140275 | orchestrator | 2026-03-17 01:08:50.140278 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-17 01:08:50.140302 | orchestrator | Tuesday 17 March 2026 01:06:32 +0000 (0:00:00.270) 0:00:30.103 ********* 2026-03-17 01:08:50.140306 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:08:50.140309 | orchestrator | 2026-03-17 01:08:50.140343 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-17 01:08:50.140348 | orchestrator | Tuesday 17 March 2026 01:06:32 +0000 (0:00:00.466) 0:00:30.570 ********* 2026-03-17 01:08:50.140352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.140357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.140372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 
'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.140376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.140498 | orchestrator | 2026-03-17 01:08:50.140502 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-17 01:08:50.140505 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:05.453) 0:00:36.024 ********* 2026-03-17 01:08:50.140509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.140512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:08:50.140515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 
01:08:50.140528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:08:50.140561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.140565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:08:50.140676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140681 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:50.140684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140795 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:50.140798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140808 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:50.140811 | orchestrator | 2026-03-17 01:08:50.140815 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-17 01:08:50.140818 | orchestrator | Tuesday 17 March 2026 01:06:41 +0000 
(0:00:02.719) 0:00:38.743 ********* 2026-03-17 01:08:50.140821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.140824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:08:50.140840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.140849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.140861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:08:50.140872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:08:50.140903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140928 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:50.140931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140954 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:50.140957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.140960 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:50.140966 | orchestrator | 2026-03-17 01:08:50.140971 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-17 01:08:50.140977 | orchestrator | Tuesday 17 March 2026 01:06:43 +0000 (0:00:02.004) 0:00:40.748 ********* 2026-03-17 01:08:50.140982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.140988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.141013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.141022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141033 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141058 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141150 | orchestrator | 2026-03-17 01:08:50.141155 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-17 01:08:50.141162 | orchestrator | Tuesday 17 March 2026 01:06:49 +0000 (0:00:06.603) 0:00:47.353 ********* 2026-03-17 01:08:50.141166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.141174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.141190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:08:50.141195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141278 | orchestrator | 2026-03-17 01:08:50.141281 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-17 01:08:50.141354 | orchestrator | Tuesday 17 March 2026 01:07:10 +0000 (0:00:20.393) 0:01:07.747 ********* 2026-03-17 01:08:50.141358 | orchestrator | changed: [testbed-node-0] 
=> (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-17 01:08:50.141361 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-17 01:08:50.141365 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-17 01:08:50.141368 | orchestrator | 2026-03-17 01:08:50.141371 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-17 01:08:50.141374 | orchestrator | Tuesday 17 March 2026 01:07:15 +0000 (0:00:05.003) 0:01:12.750 ********* 2026-03-17 01:08:50.141377 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-17 01:08:50.141380 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-17 01:08:50.141383 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-17 01:08:50.141387 | orchestrator | 2026-03-17 01:08:50.141390 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-17 01:08:50.141393 | orchestrator | Tuesday 17 March 2026 01:07:18 +0000 (0:00:03.109) 0:01:15.860 ********* 2026-03-17 01:08:50.141399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.141413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.141420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.141427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141493 | orchestrator |
2026-03-17 01:08:50.141497 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-17 01:08:50.141500 | orchestrator | Tuesday 17 March 2026 01:07:21 +0000 (0:00:03.301) 0:01:19.162 *********
2026-03-17 01:08:50.141505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:08:50.141508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:08:50.141516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:08:50.141519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141582 | orchestrator |
2026-03-17 01:08:50.141585 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-17 01:08:50.141588 | orchestrator | Tuesday 17 March 2026 01:07:24 +0000 (0:00:02.514) 0:01:21.676 *********
2026-03-17 01:08:50.141592 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:50.141595 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:50.141598 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:50.141601 | orchestrator |
2026-03-17 01:08:50.141605 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-17 01:08:50.141608 | orchestrator | Tuesday 17 March 2026 01:07:24 +0000 (0:00:00.195) 0:01:21.871 *********
2026-03-17 01:08:50.141614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:08:50.141619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:08:50.141635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141662 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:08:50.141665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141669 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:08:50.141675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:08:50.141682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141700 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:08:50.141703 | orchestrator |
2026-03-17 01:08:50.141707 | orchestrator | TASK [service-check-containers : designate | Check containers] *****************
2026-03-17 01:08:50.141712 | orchestrator | Tuesday 17 March 2026 01:07:25 +0000 (0:00:00.898) 0:01:22.769 *********
2026-03-17 01:08:50.141717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:08:50.141723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:08:50.141726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:08:50.141729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-17 01:08:50.141743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-17 01:08:50.141779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1',
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:08:50.141794 | orchestrator | 2026-03-17 01:08:50.141798 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-03-17 01:08:50.141801 | orchestrator | Tuesday 17 March 2026 01:07:30 +0000 (0:00:05.117) 0:01:27.887 ********* 2026-03-17 01:08:50.141804 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:08:50.141807 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:08:50.141810 | orchestrator | } 2026-03-17 01:08:50.141814 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:08:50.141817 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:08:50.141820 | orchestrator | } 2026-03-17 01:08:50.141823 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:08:50.141826 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:08:50.141831 | orchestrator | } 2026-03-17 01:08:50.141834 | orchestrator | 2026-03-17 01:08:50.141838 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:08:50.141841 | orchestrator | Tuesday 17 March 2026 01:07:30 +0000 (0:00:00.414) 0:01:28.301 ********* 2026-03-17 01:08:50.141844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.141849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:08:50.141853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.141859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:08:50.141862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141896 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:50.141899 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:50.141902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:08:50.141908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-17 01:08:50.141914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:08:50.141940 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:50.141945 | orchestrator | 2026-03-17 01:08:50.141951 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-17 01:08:50.141956 | orchestrator | Tuesday 17 March 2026 01:07:32 +0000 (0:00:02.063) 0:01:30.365 ********* 2026-03-17 01:08:50.141961 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:08:50.141966 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:08:50.141972 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:08:50.141977 | orchestrator | 2026-03-17 01:08:50.141983 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-17 01:08:50.141986 | orchestrator | Tuesday 17 March 2026 01:07:33 +0000 (0:00:00.235) 0:01:30.601 ********* 2026-03-17 01:08:50.141989 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-17 01:08:50.141992 | orchestrator | 2026-03-17 01:08:50.141995 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-17 01:08:50.141999 | orchestrator | Tuesday 17 March 2026 01:07:35 +0000 (0:00:02.613) 0:01:33.215 ********* 2026-03-17 01:08:50.142003 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-17 01:08:50.142007 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-17 01:08:50.142010 | orchestrator | 2026-03-17 01:08:50.142051 | orchestrator | TASK [designate : Running 
Designate bootstrap container] *********************** 2026-03-17 01:08:50.142055 | orchestrator | Tuesday 17 March 2026 01:07:38 +0000 (0:00:02.726) 0:01:35.942 ********* 2026-03-17 01:08:50.142059 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:50.142063 | orchestrator | 2026-03-17 01:08:50.142069 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-17 01:08:50.142072 | orchestrator | Tuesday 17 March 2026 01:07:52 +0000 (0:00:14.361) 0:01:50.303 ********* 2026-03-17 01:08:50.142080 | orchestrator | 2026-03-17 01:08:50.142083 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-17 01:08:50.142087 | orchestrator | Tuesday 17 March 2026 01:07:52 +0000 (0:00:00.123) 0:01:50.427 ********* 2026-03-17 01:08:50.142090 | orchestrator | 2026-03-17 01:08:50.142094 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-17 01:08:50.142097 | orchestrator | Tuesday 17 March 2026 01:07:52 +0000 (0:00:00.118) 0:01:50.545 ********* 2026-03-17 01:08:50.142101 | orchestrator | 2026-03-17 01:08:50.142104 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-17 01:08:50.142108 | orchestrator | Tuesday 17 March 2026 01:07:53 +0000 (0:00:00.129) 0:01:50.674 ********* 2026-03-17 01:08:50.142112 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:50.142115 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:50.142119 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:50.142123 | orchestrator | 2026-03-17 01:08:50.142126 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-17 01:08:50.142130 | orchestrator | Tuesday 17 March 2026 01:08:01 +0000 (0:00:08.636) 0:01:59.311 ********* 2026-03-17 01:08:50.142133 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:50.142137 | orchestrator | 
changed: [testbed-node-2] 2026-03-17 01:08:50.142140 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:50.142144 | orchestrator | 2026-03-17 01:08:50.142148 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-17 01:08:50.142151 | orchestrator | Tuesday 17 March 2026 01:08:08 +0000 (0:00:06.718) 0:02:06.030 ********* 2026-03-17 01:08:50.142155 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:50.142159 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:50.142162 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:50.142166 | orchestrator | 2026-03-17 01:08:50.142169 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-17 01:08:50.142173 | orchestrator | Tuesday 17 March 2026 01:08:19 +0000 (0:00:11.350) 0:02:17.380 ********* 2026-03-17 01:08:50.142177 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:50.142180 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:50.142184 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:50.142187 | orchestrator | 2026-03-17 01:08:50.142192 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-17 01:08:50.142198 | orchestrator | Tuesday 17 March 2026 01:08:24 +0000 (0:00:04.867) 0:02:22.248 ********* 2026-03-17 01:08:50.142204 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:50.142209 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:08:50.142216 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:50.142223 | orchestrator | 2026-03-17 01:08:50.142229 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-17 01:08:50.142235 | orchestrator | Tuesday 17 March 2026 01:08:32 +0000 (0:00:08.291) 0:02:30.539 ********* 2026-03-17 01:08:50.142241 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:08:50.142247 | orchestrator | changed: 
[testbed-node-2] 2026-03-17 01:08:50.142253 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:50.142258 | orchestrator | 2026-03-17 01:08:50.142263 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-17 01:08:50.142267 | orchestrator | Tuesday 17 March 2026 01:08:41 +0000 (0:00:08.813) 0:02:39.353 ********* 2026-03-17 01:08:50.142271 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:08:50.142274 | orchestrator | 2026-03-17 01:08:50.142278 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:08:50.142291 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-17 01:08:50.142297 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:08:50.142308 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:08:50.142319 | orchestrator | 2026-03-17 01:08:50.142324 | orchestrator | 2026-03-17 01:08:50.142329 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:08:50.142334 | orchestrator | Tuesday 17 March 2026 01:08:49 +0000 (0:00:07.587) 0:02:46.940 ********* 2026-03-17 01:08:50.142339 | orchestrator | =============================================================================== 2026-03-17 01:08:50.142343 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.39s 2026-03-17 01:08:50.142347 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.36s 2026-03-17 01:08:50.142351 | orchestrator | designate : Restart designate-central container ------------------------ 11.35s 2026-03-17 01:08:50.142355 | orchestrator | designate : Restart designate-worker container -------------------------- 8.81s 2026-03-17 01:08:50.142358 | 
orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.64s 2026-03-17 01:08:50.142362 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.29s 2026-03-17 01:08:50.142365 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.59s 2026-03-17 01:08:50.142369 | orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 7.23s 2026-03-17 01:08:50.142373 | orchestrator | designate : Restart designate-api container ----------------------------- 6.72s 2026-03-17 01:08:50.142376 | orchestrator | designate : Copying over config.json files for services ----------------- 6.60s 2026-03-17 01:08:50.142380 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.45s 2026-03-17 01:08:50.142384 | orchestrator | service-check-containers : designate | Check containers ----------------- 5.12s 2026-03-17 01:08:50.142390 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.00s 2026-03-17 01:08:50.142394 | orchestrator | designate : Restart designate-producer container ------------------------ 4.87s 2026-03-17 01:08:50.142398 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.13s 2026-03-17 01:08:50.142403 | orchestrator | service-ks-register : designate | Creating/deleting services ------------ 3.76s 2026-03-17 01:08:50.142409 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 3.51s 2026-03-17 01:08:50.142414 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.30s 2026-03-17 01:08:50.142420 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.13s 2026-03-17 01:08:50.142429 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.11s 2026-03-17 01:08:50.142437 | orchestrator | 
2026-03-17 01:08:50 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:08:50.142445 | orchestrator | 2026-03-17 01:08:50 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:08:50.143109 | orchestrator | 2026-03-17 01:08:50 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:08:50.143140 | orchestrator | 2026-03-17 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:53.175864 | orchestrator | 2026-03-17 01:08:53 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:08:53.178115 | orchestrator | 2026-03-17 01:08:53 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:08:53.178952 | orchestrator | 2026-03-17 01:08:53 | INFO  | Task aec44ead-bb84-45d8-9476-4cf2eb9b5215 is in state STARTED 2026-03-17 01:08:53.180333 | orchestrator | 2026-03-17 01:08:53 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:08:53.180646 | orchestrator | 2026-03-17 01:08:53 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:56.231818 | orchestrator | 2026-03-17 01:08:56 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:08:56.233570 | orchestrator | 2026-03-17 01:08:56 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:08:56.234622 | orchestrator | 2026-03-17 01:08:56 | INFO  | Task aec44ead-bb84-45d8-9476-4cf2eb9b5215 is in state STARTED 2026-03-17 01:08:56.235993 | orchestrator | 2026-03-17 01:08:56 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:08:56.236035 | orchestrator | 2026-03-17 01:08:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:08:59.276698 | orchestrator | 2026-03-17 01:08:59 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:08:59.278380 | orchestrator | 2026-03-17 01:08:59 | INFO  | 
Task
b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:17.852979 | orchestrator | 2026-03-17 01:09:17 | INFO  | Task aec44ead-bb84-45d8-9476-4cf2eb9b5215 is in state STARTED 2026-03-17 01:09:17.854852 | orchestrator | 2026-03-17 01:09:17 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:17.854902 | orchestrator | 2026-03-17 01:09:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:20.899792 | orchestrator | 2026-03-17 01:09:20 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:20.900342 | orchestrator | 2026-03-17 01:09:20 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:20.901941 | orchestrator | 2026-03-17 01:09:20 | INFO  | Task aec44ead-bb84-45d8-9476-4cf2eb9b5215 is in state STARTED 2026-03-17 01:09:20.902913 | orchestrator | 2026-03-17 01:09:20 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:20.902951 | orchestrator | 2026-03-17 01:09:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:23.934494 | orchestrator | 2026-03-17 01:09:23 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:23.935995 | orchestrator | 2026-03-17 01:09:23 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:23.937421 | orchestrator | 2026-03-17 01:09:23 | INFO  | Task aec44ead-bb84-45d8-9476-4cf2eb9b5215 is in state STARTED 2026-03-17 01:09:23.938978 | orchestrator | 2026-03-17 01:09:23 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:23.939018 | orchestrator | 2026-03-17 01:09:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:26.985976 | orchestrator | 2026-03-17 01:09:26 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:26.988502 | orchestrator | 2026-03-17 01:09:26 | INFO  | Task 
b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:26.990405 | orchestrator | 2026-03-17 01:09:26 | INFO  | Task aec44ead-bb84-45d8-9476-4cf2eb9b5215 is in state SUCCESS 2026-03-17 01:09:26.992289 | orchestrator | 2026-03-17 01:09:26 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:26.994763 | orchestrator | 2026-03-17 01:09:26 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:26.994811 | orchestrator | 2026-03-17 01:09:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:30.035361 | orchestrator | 2026-03-17 01:09:30 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:30.035464 | orchestrator | 2026-03-17 01:09:30 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:30.036137 | orchestrator | 2026-03-17 01:09:30 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:30.039065 | orchestrator | 2026-03-17 01:09:30 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:30.039164 | orchestrator | 2026-03-17 01:09:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:33.063671 | orchestrator | 2026-03-17 01:09:33 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:33.064147 | orchestrator | 2026-03-17 01:09:33 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:33.064585 | orchestrator | 2026-03-17 01:09:33 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:33.065287 | orchestrator | 2026-03-17 01:09:33 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:33.065308 | orchestrator | 2026-03-17 01:09:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:36.090039 | orchestrator | 2026-03-17 01:09:36 | INFO  | Task 
d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:36.090433 | orchestrator | 2026-03-17 01:09:36 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:36.090951 | orchestrator | 2026-03-17 01:09:36 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:36.091406 | orchestrator | 2026-03-17 01:09:36 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:36.091422 | orchestrator | 2026-03-17 01:09:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:39.136760 | orchestrator | 2026-03-17 01:09:39 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:39.137850 | orchestrator | 2026-03-17 01:09:39 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:39.138145 | orchestrator | 2026-03-17 01:09:39 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:39.138627 | orchestrator | 2026-03-17 01:09:39 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:39.138651 | orchestrator | 2026-03-17 01:09:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:42.173824 | orchestrator | 2026-03-17 01:09:42 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:42.173891 | orchestrator | 2026-03-17 01:09:42 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:42.176288 | orchestrator | 2026-03-17 01:09:42 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:42.177312 | orchestrator | 2026-03-17 01:09:42 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:42.177348 | orchestrator | 2026-03-17 01:09:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:45.215899 | orchestrator | 2026-03-17 01:09:45 | INFO  | Task 
d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:45.216311 | orchestrator | 2026-03-17 01:09:45 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:45.216992 | orchestrator | 2026-03-17 01:09:45 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:45.220103 | orchestrator | 2026-03-17 01:09:45 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:45.220158 | orchestrator | 2026-03-17 01:09:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:48.254100 | orchestrator | 2026-03-17 01:09:48 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:48.254474 | orchestrator | 2026-03-17 01:09:48 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state STARTED 2026-03-17 01:09:48.256289 | orchestrator | 2026-03-17 01:09:48 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:48.257902 | orchestrator | 2026-03-17 01:09:48 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:48.257931 | orchestrator | 2026-03-17 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:51.314679 | orchestrator | 2026-03-17 01:09:51 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:09:51.321137 | orchestrator | 2026-03-17 01:09:51 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:51.323144 | orchestrator | 2026-03-17 01:09:51.323240 | orchestrator | 2026-03-17 01:09:51.323253 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:09:51.323260 | orchestrator | 2026-03-17 01:09:51.323263 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:09:51.323267 | orchestrator | Tuesday 17 March 2026 01:08:53 +0000 (0:00:00.731) 0:00:00.731 
********* 2026-03-17 01:09:51.323270 | orchestrator | ok: [testbed-manager] 2026-03-17 01:09:51.323274 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:09:51.323277 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:09:51.323281 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:09:51.323284 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:09:51.323287 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:09:51.323290 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:09:51.323293 | orchestrator | 2026-03-17 01:09:51.323297 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:09:51.323300 | orchestrator | Tuesday 17 March 2026 01:08:55 +0000 (0:00:01.639) 0:00:02.371 ********* 2026-03-17 01:09:51.323303 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-17 01:09:51.323307 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-17 01:09:51.323310 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-17 01:09:51.323313 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-17 01:09:51.323317 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-17 01:09:51.323320 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-17 01:09:51.323323 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-17 01:09:51.323327 | orchestrator | 2026-03-17 01:09:51.323330 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-17 01:09:51.323333 | orchestrator | 2026-03-17 01:09:51.323336 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-17 01:09:51.323339 | orchestrator | Tuesday 17 March 2026 01:08:56 +0000 (0:00:00.876) 0:00:03.248 ********* 2026-03-17 01:09:51.323343 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:09:51.323347 | orchestrator | 2026-03-17 01:09:51.323351 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] ************* 2026-03-17 01:09:51.323354 | orchestrator | Tuesday 17 March 2026 01:08:57 +0000 (0:00:01.221) 0:00:04.469 ********* 2026-03-17 01:09:51.323359 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-17 01:09:51.323364 | orchestrator | 2026-03-17 01:09:51.323369 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************ 2026-03-17 01:09:51.323375 | orchestrator | Tuesday 17 March 2026 01:09:01 +0000 (0:00:04.603) 0:00:09.073 ********* 2026-03-17 01:09:51.323421 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-17 01:09:51.323445 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-17 01:09:51.323452 | orchestrator | 2026-03-17 01:09:51.323455 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-17 01:09:51.323458 | orchestrator | Tuesday 17 March 2026 01:09:07 +0000 (0:00:05.947) 0:00:15.021 ********* 2026-03-17 01:09:51.323462 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-17 01:09:51.323465 | orchestrator | 2026-03-17 01:09:51.323468 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-17 01:09:51.323471 | orchestrator | Tuesday 17 March 2026 01:09:11 +0000 (0:00:03.520) 0:00:18.541 ********* 2026-03-17 01:09:51.323474 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-17 01:09:51.323480 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:09:51.323485 | 
orchestrator | 2026-03-17 01:09:51.323490 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-17 01:09:51.323495 | orchestrator | Tuesday 17 March 2026 01:09:14 +0000 (0:00:03.382) 0:00:21.924 ********* 2026-03-17 01:09:51.323500 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-17 01:09:51.323505 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-17 01:09:51.323510 | orchestrator | 2026-03-17 01:09:51.323515 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] *********** 2026-03-17 01:09:51.323520 | orchestrator | Tuesday 17 March 2026 01:09:20 +0000 (0:00:06.212) 0:00:28.137 ********* 2026-03-17 01:09:51.323526 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-17 01:09:51.323531 | orchestrator | 2026-03-17 01:09:51.323536 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:09:51.323541 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:09:51.323548 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:09:51.323551 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:09:51.323554 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:09:51.323560 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:09:51.323582 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:09:51.323588 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:09:51.323593 | orchestrator | 2026-03-17 01:09:51.323599 | orchestrator | 
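The service-ks-register steps recapped above (create service, endpoints, project, user, roles, then grant the role) map onto a fixed sequence of Keystone operations. A rough sketch that renders that sequence as the equivalent `openstack` CLI commands, using the swift/ceph-rgw values visible in the log (sketch only; the Kolla role drives Ansible modules, not the CLI):

```python
def ks_register_commands(service, service_type, endpoints, user, project, roles):
    """Build rough `openstack` CLI equivalents of the service-ks-register
    sequence: service, endpoints, project, user, roles, role grants."""
    cmds = [f"openstack service create --name {service} {service_type}"]
    for interface, url in endpoints.items():
        cmds.append(f"openstack endpoint create {service} {interface} '{url}'")
    cmds.append(f"openstack project create {project}")
    cmds.append(f"openstack user create --project {project} {user}")
    for role in roles:
        cmds.append(f"openstack role create {role}")
        cmds.append(f"openstack role add --project {project} --user {user} {role}")
    return cmds

# Values taken from the ceph-rgw play above.
cmds = ks_register_commands(
    service="swift",
    service_type="object-store",
    endpoints={
        "internal": "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
        "public": "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s",
    },
    user="ceph_rgw",
    project="service",
    roles=["admin"],
)
```

The "ok" versus "changed" results in the log reflect idempotency: the `service` project already existed, so that step reports ok, while the swift service, endpoints, and the ceph_rgw user were created on this run.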
2026-03-17 01:09:51.323612 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:09:51.323621 | orchestrator | Tuesday 17 March 2026 01:09:25 +0000 (0:00:04.378) 0:00:32.515 ********* 2026-03-17 01:09:51.323627 | orchestrator | =============================================================================== 2026-03-17 01:09:51.323632 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.21s 2026-03-17 01:09:51.323637 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 5.95s 2026-03-17 01:09:51.323642 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 4.60s 2026-03-17 01:09:51.323648 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 4.38s 2026-03-17 01:09:51.323651 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.52s 2026-03-17 01:09:51.323654 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.38s 2026-03-17 01:09:51.323663 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.64s 2026-03-17 01:09:51.323668 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.22s 2026-03-17 01:09:51.323673 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2026-03-17 01:09:51.323678 | orchestrator | 2026-03-17 01:09:51.323683 | orchestrator | 2026-03-17 01:09:51.323688 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:09:51.323694 | orchestrator | 2026-03-17 01:09:51.323699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:09:51.323704 | orchestrator | Tuesday 17 March 2026 01:07:58 +0000 (0:00:00.427) 0:00:00.427 ********* 2026-03-17 
01:09:51.323709 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:09:51.323715 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:09:51.323720 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:09:51.323725 | orchestrator | 2026-03-17 01:09:51.323730 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:09:51.323736 | orchestrator | Tuesday 17 March 2026 01:07:58 +0000 (0:00:00.218) 0:00:00.646 ********* 2026-03-17 01:09:51.323742 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-17 01:09:51.323747 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-17 01:09:51.323751 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-17 01:09:51.323757 | orchestrator | 2026-03-17 01:09:51.323978 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-17 01:09:51.323986 | orchestrator | 2026-03-17 01:09:51.323992 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-17 01:09:51.323997 | orchestrator | Tuesday 17 March 2026 01:07:59 +0000 (0:00:00.251) 0:00:00.897 ********* 2026-03-17 01:09:51.324003 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:09:51.324008 | orchestrator | 2026-03-17 01:09:51.324013 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] *************** 2026-03-17 01:09:51.324019 | orchestrator | Tuesday 17 March 2026 01:07:59 +0000 (0:00:00.573) 0:00:01.470 ********* 2026-03-17 01:09:51.324025 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-17 01:09:51.324030 | orchestrator | 2026-03-17 01:09:51.324035 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting endpoints] ************** 2026-03-17 01:09:51.324040 | orchestrator | Tuesday 17 March 2026 01:08:03 +0000 
(0:00:03.943) 0:00:05.414 ********* 2026-03-17 01:09:51.324046 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-17 01:09:51.324051 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-17 01:09:51.324054 | orchestrator | 2026-03-17 01:09:51.324058 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-17 01:09:51.324062 | orchestrator | Tuesday 17 March 2026 01:08:09 +0000 (0:00:06.312) 0:00:11.726 ********* 2026-03-17 01:09:51.324065 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-17 01:09:51.324069 | orchestrator | 2026-03-17 01:09:51.324072 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-17 01:09:51.324076 | orchestrator | Tuesday 17 March 2026 01:08:12 +0000 (0:00:02.733) 0:00:14.460 ********* 2026-03-17 01:09:51.324079 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-17 01:09:51.324083 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:09:51.324087 | orchestrator | 2026-03-17 01:09:51.324091 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-17 01:09:51.324094 | orchestrator | Tuesday 17 March 2026 01:08:15 +0000 (0:00:03.303) 0:00:17.763 ********* 2026-03-17 01:09:51.324098 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:09:51.324101 | orchestrator | 2026-03-17 01:09:51.324104 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] ************* 2026-03-17 01:09:51.324112 | orchestrator | Tuesday 17 March 2026 01:08:18 +0000 (0:00:02.802) 0:00:20.566 ********* 2026-03-17 01:09:51.324116 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-17 01:09:51.324120 | orchestrator | 2026-03-17 
01:09:51.324123 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-17 01:09:51.324128 | orchestrator | Tuesday 17 March 2026 01:08:22 +0000 (0:00:03.277) 0:00:23.844 ********* 2026-03-17 01:09:51.324134 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:09:51.324139 | orchestrator | 2026-03-17 01:09:51.324148 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-17 01:09:51.324159 | orchestrator | Tuesday 17 March 2026 01:08:24 +0000 (0:00:02.882) 0:00:26.727 ********* 2026-03-17 01:09:51.324165 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:09:51.324170 | orchestrator | 2026-03-17 01:09:51.324176 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-17 01:09:51.324181 | orchestrator | Tuesday 17 March 2026 01:08:28 +0000 (0:00:03.538) 0:00:30.266 ********* 2026-03-17 01:09:51.324187 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:09:51.324207 | orchestrator | 2026-03-17 01:09:51.324212 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-17 01:09:51.324217 | orchestrator | Tuesday 17 March 2026 01:08:32 +0000 (0:00:03.627) 0:00:33.893 ********* 2026-03-17 01:09:51.324225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 
01:09:51.324274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324280 | orchestrator | 2026-03-17 01:09:51.324286 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-17 01:09:51.324291 | orchestrator | Tuesday 17 March 2026 01:08:34 +0000 (0:00:01.988) 0:00:35.882 ********* 2026-03-17 01:09:51.324296 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:09:51.324301 | orchestrator | 2026-03-17 01:09:51.324306 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-17 01:09:51.324312 | orchestrator | Tuesday 17 March 2026 01:08:34 +0000 (0:00:00.129) 0:00:36.011 ********* 2026-03-17 01:09:51.324317 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:09:51.324322 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:09:51.324327 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:09:51.324333 | orchestrator | 2026-03-17 01:09:51.324338 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-17 01:09:51.324343 | orchestrator | Tuesday 17 March 2026 01:08:34 +0000 (0:00:00.298) 0:00:36.310 ********* 2026-03-17 01:09:51.324348 | orchestrator | ok: [testbed-node-0 -> localhost] 
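Each item in the config-directory and copy tasks above is a Kolla service definition: container name, image, volumes, an optional healthcheck, and optional haproxy frontends. A small sketch of pulling the healthcheck command out of such a mapping (the structure is copied from the log output; the helper name is made up):

```python
def healthcheck_command(service_def):
    """Return the shell command run by a Kolla-style container
    healthcheck, or None if the service defines no healthcheck."""
    hc = service_def.get("healthcheck")
    if not hc:
        return None
    test = hc.get("test", [])
    # Kolla uses the Docker convention ["CMD-SHELL", "<command>"].
    if len(test) == 2 and test[0] == "CMD-SHELL":
        return test[1]
    return None

# Trimmed-down magnum-api definition from the log output above.
magnum_api = {
    "container_name": "magnum_api",
    "image": "registry.osism.tech/kolla/magnum-api:2025.1",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
        "timeout": "30",
    },
}
```

Note how the healthcheck target differs per node (192.168.16.10/.11/.12 for testbed-node-0/1/2), while the haproxy `external_fqdn` is the shared api.testbed.osism.xyz.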
2026-03-17 01:09:51.324354 | orchestrator | 2026-03-17 01:09:51.324360 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-17 01:09:51.324365 | orchestrator | Tuesday 17 March 2026 01:08:35 +0000 (0:00:00.861) 0:00:37.172 ********* 2026-03-17 01:09:51.324374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324419 | orchestrator | 2026-03-17 01:09:51.324424 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-17 01:09:51.324429 | orchestrator | Tuesday 17 March 2026 
01:08:37 +0000 (0:00:02.375) 0:00:39.548 ********* 2026-03-17 01:09:51.324434 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:09:51.324439 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:09:51.324444 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:09:51.324450 | orchestrator | 2026-03-17 01:09:51.324455 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-17 01:09:51.324467 | orchestrator | Tuesday 17 March 2026 01:08:38 +0000 (0:00:00.429) 0:00:39.977 ********* 2026-03-17 01:09:51.324476 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:09:51.324481 | orchestrator | 2026-03-17 01:09:51.324486 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-17 01:09:51.324491 | orchestrator | Tuesday 17 March 2026 01:08:38 +0000 (0:00:00.500) 0:00:40.478 ********* 2026-03-17 01:09:51.324496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-03-17 01:09:51.324501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324539 | orchestrator | 2026-03-17 01:09:51.324544 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-17 01:09:51.324549 | orchestrator | Tuesday 17 March 2026 01:08:41 +0000 (0:00:02.399) 0:00:42.877 ********* 2026-03-17 01:09:51.324555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324570 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:09:51.324578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324597 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:09:51.324603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
2026-03-17 01:09:51 | INFO  | Task b8b890c3-8ab7-49eb-9685-04b0da5e833c is in state SUCCESS
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324608 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:09:51.324614 | orchestrator | 2026-03-17 01:09:51.324619 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-17 01:09:51.324624 | orchestrator | Tuesday 17 March 2026 01:08:42 +0000 (0:00:01.088) 0:00:43.966 ********* 2026-03-17 01:09:51.324630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324650 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:09:51.324653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True,
'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324667 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:09:51.324675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324678 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:09:51.324682 | orchestrator | 2026-03-17 01:09:51.324685 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-17 01:09:51.324688 | orchestrator | Tuesday 17 March 2026 01:08:43 +0000 (0:00:00.906) 0:00:44.872 ********* 2026-03-17 01:09:51.324692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324709 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324725 | orchestrator | 2026-03-17 01:09:51.324729 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-17 01:09:51.324732 | orchestrator | Tuesday 17 March 2026 01:08:45 +0000 (0:00:02.364) 0:00:47.237 ********* 2026-03-17 01:09:51.324735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324763 | orchestrator | 2026-03-17 01:09:51.324766 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-17 01:09:51.324769 | orchestrator | Tuesday 17 March 2026 01:08:50 +0000 (0:00:04.906) 0:00:52.144 ********* 2026-03-17 01:09:51.324774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324786 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:09:51.324789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324796 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:09:51.324799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324863 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:09:51.324868 | orchestrator | 2026-03-17 01:09:51.324871 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-03-17 01:09:51.324875 | orchestrator | Tuesday 17 March 2026 01:08:51 +0000 (0:00:00.744) 0:00:52.888 ********* 2026-03-17 01:09:51.324878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:09:51.324897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:09:51.324911 | orchestrator | 2026-03-17 01:09:51.324914 | orchestrator | TASK 
[service-check-containers : magnum | Notify handlers to restart containers] *** 2026-03-17 01:09:51.324917 | orchestrator | Tuesday 17 March 2026 01:08:52 +0000 (0:00:01.893) 0:00:54.781 ********* 2026-03-17 01:09:51.324920 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:09:51.324924 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:09:51.324927 | orchestrator | } 2026-03-17 01:09:51.324930 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:09:51.324933 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:09:51.324937 | orchestrator | } 2026-03-17 01:09:51.324940 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:09:51.324943 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:09:51.324946 | orchestrator | } 2026-03-17 01:09:51.324949 | orchestrator | 2026-03-17 01:09:51.324953 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:09:51.324956 | orchestrator | Tuesday 17 March 2026 01:08:53 +0000 (0:00:00.641) 0:00:55.423 ********* 2026-03-17 01:09:51.324959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324969 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:09:51.324977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option 
httpchk']}}}})  2026-03-17 01:09:51.324981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324985 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:09:51.324988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:09:51.324991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:09:51.324997 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:09:51.325000 | orchestrator | 2026-03-17 01:09:51.325004 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-17 01:09:51.325007 | orchestrator | Tuesday 17 March 2026 01:08:55 +0000 (0:00:02.189) 0:00:57.613 ********* 2026-03-17 01:09:51.325010 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:09:51.325013 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:09:51.325016 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:09:51.325019 | orchestrator | 2026-03-17 01:09:51.325022 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-17 01:09:51.325026 | orchestrator | Tuesday 17 March 2026 01:08:56 +0000 (0:00:00.274) 0:00:57.887 ********* 2026-03-17 01:09:51.325029 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:09:51.325032 | orchestrator | 2026-03-17 01:09:51.325036 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-17 01:09:51.325042 | orchestrator | Tuesday 17 March 2026 01:08:57 +0000 (0:00:01.802) 0:00:59.689 ********* 2026-03-17 01:09:51.325045 | orchestrator | changed: [testbed-node-0] 2026-03-17 
01:09:51.325048 | orchestrator | 2026-03-17 01:09:51.325052 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-17 01:09:51.325055 | orchestrator | Tuesday 17 March 2026 01:09:00 +0000 (0:00:02.182) 0:01:01.872 ********* 2026-03-17 01:09:51.325058 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:09:51.325061 | orchestrator | 2026-03-17 01:09:51.325064 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-17 01:09:51.325067 | orchestrator | Tuesday 17 March 2026 01:09:18 +0000 (0:00:18.021) 0:01:19.894 ********* 2026-03-17 01:09:51.325070 | orchestrator | 2026-03-17 01:09:51.325074 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-17 01:09:51.325077 | orchestrator | Tuesday 17 March 2026 01:09:18 +0000 (0:00:00.065) 0:01:19.959 ********* 2026-03-17 01:09:51.325080 | orchestrator | 2026-03-17 01:09:51.325083 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-17 01:09:51.325086 | orchestrator | Tuesday 17 March 2026 01:09:18 +0000 (0:00:00.064) 0:01:20.023 ********* 2026-03-17 01:09:51.325092 | orchestrator | 2026-03-17 01:09:51.325096 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-17 01:09:51.325099 | orchestrator | Tuesday 17 March 2026 01:09:18 +0000 (0:00:00.067) 0:01:20.091 ********* 2026-03-17 01:09:51.325103 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:09:51.325106 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:09:51.325109 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:09:51.325112 | orchestrator | 2026-03-17 01:09:51.325115 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-17 01:09:51.325118 | orchestrator | Tuesday 17 March 2026 01:09:33 +0000 (0:00:15.569) 0:01:35.661 ********* 2026-03-17 
01:09:51.325122 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:09:51.325125 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:09:51.325128 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:09:51.325132 | orchestrator | 2026-03-17 01:09:51.325137 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:09:51.325142 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:09:51.325151 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:09:51.325156 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-17 01:09:51.325162 | orchestrator | 2026-03-17 01:09:51.325167 | orchestrator | 2026-03-17 01:09:51.325171 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:09:51.325179 | orchestrator | Tuesday 17 March 2026 01:09:49 +0000 (0:00:15.783) 0:01:51.445 ********* 2026-03-17 01:09:51.325185 | orchestrator | =============================================================================== 2026-03-17 01:09:51.325190 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.02s 2026-03-17 01:09:51.325209 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.78s 2026-03-17 01:09:51.325214 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.57s 2026-03-17 01:09:51.325219 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 6.31s 2026-03-17 01:09:51.325224 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.91s 2026-03-17 01:09:51.325228 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 3.94s 2026-03-17 01:09:51.325233 | 
orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.63s 2026-03-17 01:09:51.325238 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.54s 2026-03-17 01:09:51.325243 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.30s 2026-03-17 01:09:51.325248 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 3.28s 2026-03-17 01:09:51.325253 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.88s 2026-03-17 01:09:51.325258 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.80s 2026-03-17 01:09:51.325263 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.73s 2026-03-17 01:09:51.325268 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.40s 2026-03-17 01:09:51.325274 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.38s 2026-03-17 01:09:51.325278 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.36s 2026-03-17 01:09:51.325283 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.19s 2026-03-17 01:09:51.325288 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.18s 2026-03-17 01:09:51.325291 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.99s 2026-03-17 01:09:51.325295 | orchestrator | service-check-containers : magnum | Check containers -------------------- 1.89s 2026-03-17 01:09:51.325300 | orchestrator | 2026-03-17 01:09:51 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:51.331122 | orchestrator | 2026-03-17 01:09:51 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 
01:09:51.331229 | orchestrator | 2026-03-17 01:09:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:54.367641 | orchestrator | 2026-03-17 01:09:54 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:09:54.368464 | orchestrator | 2026-03-17 01:09:54 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:54.369374 | orchestrator | 2026-03-17 01:09:54 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:54.370520 | orchestrator | 2026-03-17 01:09:54 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:54.370583 | orchestrator | 2026-03-17 01:09:54 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:09:57.398541 | orchestrator | 2026-03-17 01:09:57 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:09:57.399793 | orchestrator | 2026-03-17 01:09:57 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:09:57.400713 | orchestrator | 2026-03-17 01:09:57 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:09:57.402059 | orchestrator | 2026-03-17 01:09:57 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:09:57.402280 | orchestrator | 2026-03-17 01:09:57 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:00.426639 | orchestrator | 2026-03-17 01:10:00 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:00.427542 | orchestrator | 2026-03-17 01:10:00 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:00.428133 | orchestrator | 2026-03-17 01:10:00 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:00.428998 | orchestrator | 2026-03-17 01:10:00 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:10:00.429022 | orchestrator 
| 2026-03-17 01:10:00 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:03.478710 | orchestrator | 2026-03-17 01:10:03 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:03.479415 | orchestrator | 2026-03-17 01:10:03 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:03.480998 | orchestrator | 2026-03-17 01:10:03 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:03.481994 | orchestrator | 2026-03-17 01:10:03 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:10:03.482043 | orchestrator | 2026-03-17 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:06.551157 | orchestrator | 2026-03-17 01:10:06 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:06.552809 | orchestrator | 2026-03-17 01:10:06 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:06.552852 | orchestrator | 2026-03-17 01:10:06 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:06.553133 | orchestrator | 2026-03-17 01:10:06 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:10:06.553143 | orchestrator | 2026-03-17 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:09.614254 | orchestrator | 2026-03-17 01:10:09 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:09.618461 | orchestrator | 2026-03-17 01:10:09 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:09.620043 | orchestrator | 2026-03-17 01:10:09 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:09.620909 | orchestrator | 2026-03-17 01:10:09 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:10:09.621841 | orchestrator | 2026-03-17 01:10:09 | INFO  | 
Wait 1 second(s) until the next check 2026-03-17 01:10:12.651221 | orchestrator | 2026-03-17 01:10:12 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:12.651787 | orchestrator | 2026-03-17 01:10:12 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:12.653059 | orchestrator | 2026-03-17 01:10:12 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:12.653808 | orchestrator | 2026-03-17 01:10:12 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:10:12.653840 | orchestrator | 2026-03-17 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:15.690362 | orchestrator | 2026-03-17 01:10:15 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:15.692388 | orchestrator | 2026-03-17 01:10:15 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:15.694816 | orchestrator | 2026-03-17 01:10:15 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:15.696408 | orchestrator | 2026-03-17 01:10:15 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state STARTED 2026-03-17 01:10:15.696580 | orchestrator | 2026-03-17 01:10:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:18.731909 | orchestrator | 2026-03-17 01:10:18 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:18.732090 | orchestrator | 2026-03-17 01:10:18 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:18.733390 | orchestrator | 2026-03-17 01:10:18 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:18.734935 | orchestrator | 2026-03-17 01:10:18 | INFO  | Task 9e4c25ce-9a1d-49a4-b8ec-b31d8a91e286 is in state SUCCESS 2026-03-17 01:10:18.738288 | orchestrator | 2026-03-17 01:10:18.738332 | orchestrator | 2026-03-17 
01:10:18.738338 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:10:18.738342 | orchestrator |
2026-03-17 01:10:18.738346 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:10:18.738349 | orchestrator | Tuesday 17 March 2026 01:06:02 +0000 (0:00:00.380) 0:00:00.380 *********
2026-03-17 01:10:18.738353 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:18.738396 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:10:18.738403 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:10:18.738409 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:10:18.738414 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:10:18.738420 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:10:18.738426 | orchestrator |
2026-03-17 01:10:18.738429 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:10:18.738433 | orchestrator | Tuesday 17 March 2026 01:06:03 +0000 (0:00:00.526) 0:00:00.907 *********
2026-03-17 01:10:18.738436 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-03-17 01:10:18.738440 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-03-17 01:10:18.738444 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-03-17 01:10:18.738447 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-03-17 01:10:18.738450 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-03-17 01:10:18.738505 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-03-17 01:10:18.738509 | orchestrator |
2026-03-17 01:10:18.738512 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-03-17 01:10:18.738515 | orchestrator |
2026-03-17 01:10:18.738518 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-17 01:10:18.738522 | orchestrator | Tuesday 17 March 2026 01:06:04 +0000 (0:00:00.935) 0:00:01.842 *********
2026-03-17 01:10:18.738526 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:10:18.738529 | orchestrator |
2026-03-17 01:10:18.738533 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-03-17 01:10:18.738536 | orchestrator | Tuesday 17 March 2026 01:06:05 +0000 (0:00:01.153) 0:00:02.996 *********
2026-03-17 01:10:18.738539 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:18.738543 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:10:18.738546 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:10:18.738549 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:10:18.738552 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:10:18.738556 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:10:18.738559 | orchestrator |
2026-03-17 01:10:18.738562 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-03-17 01:10:18.738566 | orchestrator | Tuesday 17 March 2026 01:06:07 +0000 (0:00:01.597) 0:00:04.593 *********
2026-03-17 01:10:18.738588 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:18.738595 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:10:18.738600 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:10:18.738605 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:10:18.738610 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:10:18.738615 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:10:18.738782 | orchestrator |
2026-03-17 01:10:18.738796 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-03-17 01:10:18.738801 | orchestrator | Tuesday 17 March 2026 01:06:08 +0000 (0:00:00.568) 0:00:05.681 *********
2026-03-17 01:10:18.738807 | orchestrator | ok: [testbed-node-0] => {
2026-03-17 01:10:18.738813 | orchestrator |  "changed": false,
2026-03-17 01:10:18.738818 | orchestrator |  "msg": "All assertions passed"
2026-03-17 01:10:18.738824 | orchestrator | }
2026-03-17 01:10:18.738829 | orchestrator | ok: [testbed-node-1] => {
2026-03-17 01:10:18.738835 | orchestrator |  "changed": false,
2026-03-17 01:10:18.738841 | orchestrator |  "msg": "All assertions passed"
2026-03-17 01:10:18.738845 | orchestrator | }
2026-03-17 01:10:18.738848 | orchestrator | ok: [testbed-node-2] => {
2026-03-17 01:10:18.738851 | orchestrator |  "changed": false,
2026-03-17 01:10:18.738855 | orchestrator |  "msg": "All assertions passed"
2026-03-17 01:10:18.738858 | orchestrator | }
2026-03-17 01:10:18.738861 | orchestrator | ok: [testbed-node-3] => {
2026-03-17 01:10:18.738864 | orchestrator |  "changed": false,
2026-03-17 01:10:18.738868 | orchestrator |  "msg": "All assertions passed"
2026-03-17 01:10:18.738871 | orchestrator | }
2026-03-17 01:10:18.738874 | orchestrator | ok: [testbed-node-4] => {
2026-03-17 01:10:18.738877 | orchestrator |  "changed": false,
2026-03-17 01:10:18.738881 | orchestrator |  "msg": "All assertions passed"
2026-03-17 01:10:18.738884 | orchestrator | }
2026-03-17 01:10:18.738887 | orchestrator | ok: [testbed-node-5] => {
2026-03-17 01:10:18.738890 | orchestrator |  "changed": false,
2026-03-17 01:10:18.738895 | orchestrator |  "msg": "All assertions passed"
2026-03-17 01:10:18.738900 | orchestrator | }
2026-03-17 01:10:18.738905 | orchestrator |
2026-03-17 01:10:18.738920 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-03-17 01:10:18.738928 | orchestrator | Tuesday 17 March 2026 01:06:08 +0000 (0:00:00.568) 0:00:06.249 *********
2026-03-17 01:10:18.738933 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.738938 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:18.738942 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:18.738948 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:10:18.738953 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:10:18.738958 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:10:18.738964 | orchestrator |
2026-03-17 01:10:18.738968 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] **************
2026-03-17 01:10:18.738971 | orchestrator | Tuesday 17 March 2026 01:06:09 +0000 (0:00:00.606) 0:00:06.856 *********
2026-03-17 01:10:18.738974 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-03-17 01:10:18.738978 | orchestrator |
2026-03-17 01:10:18.738981 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting endpoints] *************
2026-03-17 01:10:18.738986 | orchestrator | Tuesday 17 March 2026 01:06:12 +0000 (0:00:03.103) 0:00:09.960 *********
2026-03-17 01:10:18.738991 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-03-17 01:10:18.739000 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-03-17 01:10:18.739006 | orchestrator |
2026-03-17 01:10:18.739021 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-03-17 01:10:18.739100 | orchestrator | Tuesday 17 March 2026 01:06:18 +0000 (0:00:06.078) 0:00:16.038 *********
2026-03-17 01:10:18.739291 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:10:18.739297 | orchestrator |
2026-03-17 01:10:18.739303 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-03-17 01:10:18.739317 | orchestrator | Tuesday 17 March 2026 01:06:21 +0000 (0:00:03.144) 0:00:19.183 *********
2026-03-17 01:10:18.739323 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-03-17 01:10:18.739330 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:10:18.739336 | orchestrator |
2026-03-17 01:10:18.739342 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-03-17 01:10:18.739349 | orchestrator | Tuesday 17 March 2026 01:06:25 +0000 (0:00:03.745) 0:00:22.928 *********
2026-03-17 01:10:18.739355 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:10:18.739361 | orchestrator |
2026-03-17 01:10:18.739367 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************
2026-03-17 01:10:18.739373 | orchestrator | Tuesday 17 March 2026 01:06:28 +0000 (0:00:02.907) 0:00:25.835 *********
2026-03-17 01:10:18.739379 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-03-17 01:10:18.739385 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-03-17 01:10:18.739390 | orchestrator |
2026-03-17 01:10:18.739396 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-17 01:10:18.739401 | orchestrator | Tuesday 17 March 2026 01:06:35 +0000 (0:00:06.782) 0:00:32.618 *********
2026-03-17 01:10:18.739406 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.739412 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:18.739418 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:18.739423 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:10:18.739429 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:10:18.739435 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:10:18.739440 | orchestrator |
2026-03-17 01:10:18.739446 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-03-17 01:10:18.739452 | orchestrator | Tuesday 17 March 2026 01:06:35 +0000 (0:00:00.481) 0:00:33.099 *********
2026-03-17 01:10:18.739457 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.739462 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:18.739468 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:18.739473 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:10:18.739480 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:10:18.739486 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:10:18.739491 | orchestrator |
2026-03-17 01:10:18.739497 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-17 01:10:18.739503 | orchestrator | Tuesday 17 March 2026 01:06:37 +0000 (0:00:02.220) 0:00:35.320 *********
2026-03-17 01:10:18.739509 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:10:18.739515 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:10:18.739521 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:10:18.739526 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:10:18.739532 | orchestrator | ok: [testbed-node-4]
2026-03-17 01:10:18.739537 | orchestrator | ok: [testbed-node-5]
2026-03-17 01:10:18.739543 | orchestrator |
2026-03-17 01:10:18.739548 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-17 01:10:18.739554 | orchestrator | Tuesday 17 March 2026 01:06:38 +0000 (0:00:00.817) 0:00:36.138 *********
2026-03-17 01:10:18.739560 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.739566 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:18.739572 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:18.739578 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:10:18.739584 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:10:18.739589 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:10:18.739595 | orchestrator |
2026-03-17 01:10:18.739601 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-03-17 01:10:18.739607 | orchestrator | Tuesday 17 March 2026 01:06:42 +0000 (0:00:03.441) 0:00:39.579 *********
2026-03-17 01:10:18.739624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.739661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.739669 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.739675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.739681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.739693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.739699 | orchestrator |
2026-03-17 01:10:18.739704 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-17 01:10:18.739710 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:03.048) 0:00:42.628 *********
2026-03-17 01:10:18.739716 | orchestrator | [WARNING]: Skipped
2026-03-17 01:10:18.739723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-03-17 01:10:18.739742 | orchestrator | due to this access issue:
2026-03-17 01:10:18.739749 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-03-17 01:10:18.739754 | orchestrator | a directory
2026-03-17 01:10:18.739760 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-17 01:10:18.739765 | orchestrator |
2026-03-17 01:10:18.739770 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-17 01:10:18.739775 | orchestrator | Tuesday 17 March 2026 01:06:45 +0000 (0:00:00.837) 0:00:43.466 *********
2026-03-17 01:10:18.739781 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:10:18.739788 | orchestrator |
2026-03-17 01:10:18.739793 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-03-17 01:10:18.739798 | orchestrator | Tuesday 17 March 2026 01:06:46 +0000 (0:00:00.985) 0:00:44.451 *********
2026-03-17 01:10:18.739804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.739810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.739822 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.739842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.739849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.739857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.739864 | orchestrator |
2026-03-17 01:10:18.739869 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-03-17 01:10:18.739874 | orchestrator | Tuesday 17 March 2026 01:06:50 +0000 (0:00:03.465) 0:00:47.917 *********
2026-03-17 01:10:18.739883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.739888 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.739897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.739902 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:18.739924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.739930 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:10:18.739936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.739946 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:18.739952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.739957 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:10:18.739972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.739978 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:10:18.739983 | orchestrator |
2026-03-17 01:10:18.739988 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-03-17 01:10:18.739994 | orchestrator | Tuesday 17 March 2026 01:06:53 +0000 (0:00:02.965) 0:00:50.882 *********
2026-03-17 01:10:18.740016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.740021 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.740025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.740032 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:18.740036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.740040 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:10:18.740046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.740050 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:18.740064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.740069 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:10:18.740074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-17 01:10:18.740079 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:10:18.740083 | orchestrator |
2026-03-17 01:10:18.740087 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-03-17 01:10:18.740090 | orchestrator | Tuesday 17 March 2026 01:06:56 +0000 (0:00:03.419) 0:00:54.302 *********
2026-03-17 01:10:18.740094 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:18.740100 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.740104 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:18.740108 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:10:18.740111 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:10:18.740115 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:10:18.740119 | orchestrator |
2026-03-17 01:10:18.740122 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-03-17 01:10:18.740126 | orchestrator | Tuesday 17 March 2026 01:06:59 +0000 (0:00:02.789) 0:00:57.092 *********
2026-03-17 01:10:18.740130 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.740134 | orchestrator |
2026-03-17 01:10:18.740137 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-03-17 01:10:18.740141 | orchestrator | Tuesday 17 March 2026 01:06:59 +0000 (0:00:00.252) 0:00:57.345 *********
2026-03-17 01:10:18.740145 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.740149 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:10:18.740170 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:18.740176 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:10:18.740181 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:10:18.740186 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:10:18.740192 | orchestrator |
2026-03-17 01:10:18.740197 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-03-17 01:10:18.740203 | orchestrator | Tuesday 17 March 2026 01:07:00 +0000 (0:00:00.795) 0:00:58.140 *********
2026-03-17 01:10:18.740208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.740215 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:10:18.740224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-03-17 01:10:18.740246 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:10:18.740253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696',
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.740263 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.740268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.740271 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 
01:10:18.740278 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.740285 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.740288 | orchestrator | 2026-03-17 01:10:18.740292 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-17 01:10:18.740295 | orchestrator | Tuesday 17 March 2026 01:07:03 +0000 (0:00:03.221) 0:01:01.361 ********* 2026-03-17 01:10:18.740312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}}) 2026-03-17 01:10:18.740350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.740359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.740367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.740373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:10:18.740387 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:10:18.740393 | orchestrator | 2026-03-17 01:10:18.740398 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-17 01:10:18.740403 | orchestrator | Tuesday 17 March 2026 01:07:07 +0000 (0:00:03.590) 0:01:04.952 ********* 2026-03-17 01:10:18.740408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': 
['option httpchk']}}}}) 2026-03-17 01:10:18.740414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.740422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:10:18.740433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.740443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:10:18.740449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:10:18.740454 | orchestrator | 2026-03-17 01:10:18.740460 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-17 01:10:18.740465 | orchestrator | Tuesday 17 March 2026 01:07:13 +0000 (0:00:05.847) 0:01:10.800 ********* 2026-03-17 01:10:18.740474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.740486 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.740493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.740502 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.740516 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.740519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.740523 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.740526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.740530 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.740535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.740543 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740548 | orchestrator | 2026-03-17 01:10:18.740556 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-17 01:10:18.740562 | orchestrator | Tuesday 17 March 2026 01:07:15 +0000 (0:00:02.190) 0:01:12.990 ********* 2026-03-17 01:10:18.740567 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740572 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.740577 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740582 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:10:18.740587 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:10:18.740592 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:10:18.740597 | orchestrator | 2026-03-17 01:10:18.740603 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-17 01:10:18.740612 | orchestrator | Tuesday 17 March 2026 01:07:18 +0000 (0:00:02.890) 0:01:15.881 ********* 2026-03-17 01:10:18.740619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.740624 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.740634 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.740639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.740644 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.740664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.740670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.740675 | orchestrator | 2026-03-17 01:10:18.740680 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-17 01:10:18.740684 | orchestrator | Tuesday 17 March 2026 01:07:21 +0000 (0:00:03.552) 0:01:19.433 ********* 2026-03-17 01:10:18.740689 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.740694 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.740699 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.740703 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740708 | orchestrator | skipping: 
[testbed-node-4] 2026-03-17 01:10:18.740712 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740717 | orchestrator | 2026-03-17 01:10:18.740723 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-17 01:10:18.740727 | orchestrator | Tuesday 17 March 2026 01:07:24 +0000 (0:00:02.400) 0:01:21.834 ********* 2026-03-17 01:10:18.740732 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.740736 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.740741 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.740746 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740753 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740759 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.740768 | orchestrator | 2026-03-17 01:10:18.740773 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-17 01:10:18.740779 | orchestrator | Tuesday 17 March 2026 01:07:26 +0000 (0:00:02.489) 0:01:24.324 ********* 2026-03-17 01:10:18.740784 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.740788 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.740793 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.740799 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740804 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740810 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.740815 | orchestrator | 2026-03-17 01:10:18.740821 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-17 01:10:18.740826 | orchestrator | Tuesday 17 March 2026 01:07:28 +0000 (0:00:01.954) 0:01:26.278 ********* 2026-03-17 01:10:18.740831 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.740837 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.740842 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:10:18.740847 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.740853 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740858 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740864 | orchestrator | 2026-03-17 01:10:18.740869 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-17 01:10:18.740874 | orchestrator | Tuesday 17 March 2026 01:07:30 +0000 (0:00:01.878) 0:01:28.157 ********* 2026-03-17 01:10:18.740882 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.740887 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.740892 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.740897 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740903 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740908 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.740914 | orchestrator | 2026-03-17 01:10:18.740919 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-17 01:10:18.740924 | orchestrator | Tuesday 17 March 2026 01:07:33 +0000 (0:00:02.668) 0:01:30.826 ********* 2026-03-17 01:10:18.740929 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:10:18.740934 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.740940 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:10:18.740945 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.740951 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:10:18.740956 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.740961 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:10:18.740967 | 
orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.740973 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:10:18.740979 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.740989 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-17 01:10:18.740994 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.740999 | orchestrator | 2026-03-17 01:10:18.741010 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-17 01:10:18.741016 | orchestrator | Tuesday 17 March 2026 01:07:35 +0000 (0:00:01.962) 0:01:32.789 ********* 2026-03-17 01:10:18.741022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.741033 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.741044 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.741063 | 
orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.741077 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.741093 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.741103 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741108 | orchestrator | 2026-03-17 01:10:18.741113 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-17 01:10:18.741118 | orchestrator | Tuesday 17 March 2026 01:07:37 +0000 (0:00:01.927) 0:01:34.716 ********* 2026-03-17 01:10:18.741124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  
2026-03-17 01:10:18.741129 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.741142 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.741173 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.741184 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.741194 | orchestrator | skipping: 
[testbed-node-5] 2026-03-17 01:10:18.741202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.741207 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741213 | orchestrator | 2026-03-17 01:10:18.741218 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-17 01:10:18.741223 | orchestrator | Tuesday 17 March 2026 01:07:39 +0000 (0:00:01.882) 0:01:36.599 ********* 2026-03-17 01:10:18.741228 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741233 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741238 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741244 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741249 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741254 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741261 | orchestrator | 2026-03-17 01:10:18.741266 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-17 01:10:18.741271 | orchestrator | Tuesday 17 March 2026 01:07:40 +0000 (0:00:01.871) 0:01:38.470 ********* 2026-03-17 01:10:18.741280 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741285 | orchestrator | 
skipping: [testbed-node-2] 2026-03-17 01:10:18.741290 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741296 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:10:18.741301 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:10:18.741306 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:10:18.741312 | orchestrator | 2026-03-17 01:10:18.741321 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-17 01:10:18.741326 | orchestrator | Tuesday 17 March 2026 01:07:44 +0000 (0:00:03.641) 0:01:42.112 ********* 2026-03-17 01:10:18.741332 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741337 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741343 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741349 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741355 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741361 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741366 | orchestrator | 2026-03-17 01:10:18.741373 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-17 01:10:18.741378 | orchestrator | Tuesday 17 March 2026 01:07:46 +0000 (0:00:02.067) 0:01:44.179 ********* 2026-03-17 01:10:18.741383 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741389 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741394 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741400 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741406 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741411 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741416 | orchestrator | 2026-03-17 01:10:18.741422 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-17 01:10:18.741427 | orchestrator | Tuesday 17 March 2026 01:07:49 +0000 (0:00:02.787) 
0:01:46.966 ********* 2026-03-17 01:10:18.741433 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741439 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741444 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741450 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741455 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741461 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741466 | orchestrator | 2026-03-17 01:10:18.741471 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-17 01:10:18.741476 | orchestrator | Tuesday 17 March 2026 01:07:51 +0000 (0:00:01.669) 0:01:48.636 ********* 2026-03-17 01:10:18.741481 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741486 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741504 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741509 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741515 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741520 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741526 | orchestrator | 2026-03-17 01:10:18.741531 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-17 01:10:18.741537 | orchestrator | Tuesday 17 March 2026 01:07:53 +0000 (0:00:02.267) 0:01:50.904 ********* 2026-03-17 01:10:18.741542 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741548 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741553 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741559 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741564 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741569 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741575 | orchestrator | 2026-03-17 01:10:18.741581 | orchestrator | TASK [neutron : Copy 
neutron-l3-agent-wrapper script] ************************** 2026-03-17 01:10:18.741586 | orchestrator | Tuesday 17 March 2026 01:07:56 +0000 (0:00:02.804) 0:01:53.710 ********* 2026-03-17 01:10:18.741591 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741597 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741606 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741612 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741617 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741623 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741628 | orchestrator | 2026-03-17 01:10:18.741633 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-17 01:10:18.741639 | orchestrator | Tuesday 17 March 2026 01:07:58 +0000 (0:00:02.732) 0:01:56.442 ********* 2026-03-17 01:10:18.741645 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741651 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741655 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741661 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741665 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741671 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741677 | orchestrator | 2026-03-17 01:10:18.741682 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-17 01:10:18.741688 | orchestrator | Tuesday 17 March 2026 01:08:00 +0000 (0:00:01.914) 0:01:58.357 ********* 2026-03-17 01:10:18.741693 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:10:18.741700 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741708 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:10:18.741715 | orchestrator | 
skipping: [testbed-node-1] 2026-03-17 01:10:18.741720 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:10:18.741725 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741731 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:10:18.741736 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741741 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:10:18.741746 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741751 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-17 01:10:18.741756 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741761 | orchestrator | 2026-03-17 01:10:18.741766 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-17 01:10:18.741773 | orchestrator | Tuesday 17 March 2026 01:08:03 +0000 (0:00:03.042) 0:02:01.399 ********* 2026-03-17 01:10:18.741787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.741794 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.741799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.741810 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.741816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.741822 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.741831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.741838 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.741848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.741853 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.741859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.741868 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.741873 | orchestrator | 2026-03-17 01:10:18.741879 | orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-03-17 01:10:18.741884 | orchestrator | Tuesday 17 March 2026 01:08:06 +0000 (0:00:02.191) 0:02:03.591 ********* 2026-03-17 01:10:18.741890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.741898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:10:18.741907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.741912 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:10:18.741921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:10:18.741928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-17 01:10:18.741933 | orchestrator | 2026-03-17 01:10:18.741938 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] *** 2026-03-17 01:10:18.741944 | orchestrator | Tuesday 17 March 2026 01:08:08 +0000 (0:00:02.857) 0:02:06.449 ********* 2026-03-17 01:10:18.741949 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:10:18.741955 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:10:18.741961 | orchestrator | } 2026-03-17 01:10:18.741966 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:10:18.741971 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:10:18.741976 | orchestrator | } 2026-03-17 01:10:18.741981 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:10:18.741987 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:10:18.741992 | orchestrator | } 2026-03-17 01:10:18.741998 | orchestrator | changed: [testbed-node-3] => { 2026-03-17 01:10:18.742004 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:10:18.742007 | orchestrator | } 2026-03-17 01:10:18.742010 | orchestrator | changed: [testbed-node-4] 
=> { 2026-03-17 01:10:18.742051 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:10:18.742054 | orchestrator | } 2026-03-17 01:10:18.742057 | orchestrator | changed: [testbed-node-5] => { 2026-03-17 01:10:18.742061 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:10:18.742064 | orchestrator | } 2026-03-17 01:10:18.742067 | orchestrator | 2026-03-17 01:10:18.742070 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:10:18.742074 | orchestrator | Tuesday 17 March 2026 01:08:09 +0000 (0:00:00.676) 0:02:07.126 ********* 2026-03-17 01:10:18.742082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.742091 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.742094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.742098 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.742101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:10:18.742105 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.742115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.742121 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.742130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.742140 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.742179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-17 01:10:18.742186 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.742191 | orchestrator | 2026-03-17 01:10:18.742196 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-17 01:10:18.742201 | orchestrator | Tuesday 17 March 2026 01:08:12 +0000 (0:00:03.417) 0:02:10.543 ********* 2026-03-17 01:10:18.742206 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:18.742211 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:18.742216 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:18.742221 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:10:18.742227 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:10:18.742232 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:10:18.742237 | orchestrator | 2026-03-17 01:10:18.742242 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-17 01:10:18.742248 | orchestrator | Tuesday 17 March 2026 01:08:13 +0000 (0:00:00.612) 0:02:11.155 ********* 2026-03-17 01:10:18.742254 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:10:18.742259 | orchestrator | 2026-03-17 01:10:18.742264 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-17 01:10:18.742269 | orchestrator | Tuesday 17 March 2026 01:08:15 +0000 (0:00:01.825) 0:02:12.980 ********* 2026-03-17 01:10:18.742274 | orchestrator | changed: [testbed-node-0] 2026-03-17 
01:10:18.742280 | orchestrator | 2026-03-17 01:10:18.742286 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-17 01:10:18.742290 | orchestrator | Tuesday 17 March 2026 01:08:17 +0000 (0:00:01.916) 0:02:14.897 ********* 2026-03-17 01:10:18.742293 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:10:18.742296 | orchestrator | 2026-03-17 01:10:18.742299 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:10:18.742303 | orchestrator | Tuesday 17 March 2026 01:08:57 +0000 (0:00:40.353) 0:02:55.250 ********* 2026-03-17 01:10:18.742306 | orchestrator | 2026-03-17 01:10:18.742309 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:10:18.742312 | orchestrator | Tuesday 17 March 2026 01:08:57 +0000 (0:00:00.067) 0:02:55.318 ********* 2026-03-17 01:10:18.742316 | orchestrator | 2026-03-17 01:10:18.742319 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:10:18.742322 | orchestrator | Tuesday 17 March 2026 01:08:57 +0000 (0:00:00.068) 0:02:55.387 ********* 2026-03-17 01:10:18.742325 | orchestrator | 2026-03-17 01:10:18.742329 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:10:18.742332 | orchestrator | Tuesday 17 March 2026 01:08:57 +0000 (0:00:00.066) 0:02:55.454 ********* 2026-03-17 01:10:18.742335 | orchestrator | 2026-03-17 01:10:18.742339 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:10:18.742342 | orchestrator | Tuesday 17 March 2026 01:08:57 +0000 (0:00:00.064) 0:02:55.518 ********* 2026-03-17 01:10:18.742345 | orchestrator | 2026-03-17 01:10:18.742348 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-17 01:10:18.742355 | orchestrator | Tuesday 17 
March 2026 01:08:58 +0000 (0:00:00.111) 0:02:55.629 ********* 2026-03-17 01:10:18.742359 | orchestrator | 2026-03-17 01:10:18.742362 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-17 01:10:18.742365 | orchestrator | Tuesday 17 March 2026 01:08:58 +0000 (0:00:00.082) 0:02:55.712 ********* 2026-03-17 01:10:18.742368 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:10:18.742372 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:10:18.742375 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:10:18.742378 | orchestrator | 2026-03-17 01:10:18.742384 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-17 01:10:18.742388 | orchestrator | Tuesday 17 March 2026 01:09:30 +0000 (0:00:31.962) 0:03:27.675 ********* 2026-03-17 01:10:18.742391 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:10:18.742394 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:10:18.742398 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:10:18.742401 | orchestrator | 2026-03-17 01:10:18.742404 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:10:18.742408 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 01:10:18.742412 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-17 01:10:18.742415 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-17 01:10:18.742419 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 01:10:18.742425 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-17 01:10:18.742429 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 
skipped=32  rescued=0 ignored=0 2026-03-17 01:10:18.742432 | orchestrator | 2026-03-17 01:10:18.742435 | orchestrator | 2026-03-17 01:10:18.742439 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:10:18.742442 | orchestrator | Tuesday 17 March 2026 01:10:16 +0000 (0:00:46.671) 0:04:14.347 ********* 2026-03-17 01:10:18.742445 | orchestrator | =============================================================================== 2026-03-17 01:10:18.742449 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 46.67s 2026-03-17 01:10:18.742452 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.35s 2026-03-17 01:10:18.742455 | orchestrator | neutron : Restart neutron-server container ----------------------------- 31.96s 2026-03-17 01:10:18.742459 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 6.78s 2026-03-17 01:10:18.742462 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 6.08s 2026-03-17 01:10:18.742465 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.85s 2026-03-17 01:10:18.742469 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.75s 2026-03-17 01:10:18.742472 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.64s 2026-03-17 01:10:18.742475 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.59s 2026-03-17 01:10:18.742479 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.55s 2026-03-17 01:10:18.742482 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.47s 2026-03-17 01:10:18.742485 | orchestrator | Setting sysctl values --------------------------------------------------- 3.44s 2026-03-17 
01:10:18.742488 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.42s 2026-03-17 01:10:18.742494 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.42s 2026-03-17 01:10:18.742498 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.22s 2026-03-17 01:10:18.742501 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.14s 2026-03-17 01:10:18.742504 | orchestrator | service-ks-register : neutron | Creating/deleting services -------------- 3.10s 2026-03-17 01:10:18.742507 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.05s 2026-03-17 01:10:18.742511 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.04s 2026-03-17 01:10:18.742514 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 2.97s 2026-03-17 01:10:18.742518 | orchestrator | 2026-03-17 01:10:18 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:10:18.742521 | orchestrator | 2026-03-17 01:10:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:21.772764 | orchestrator | 2026-03-17 01:10:21 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:21.773197 | orchestrator | 2026-03-17 01:10:21 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:21.773648 | orchestrator | 2026-03-17 01:10:21 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:21.774387 | orchestrator | 2026-03-17 01:10:21 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:10:21.774425 | orchestrator | 2026-03-17 01:10:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:24.794343 | orchestrator | 2026-03-17 01:10:24 | INFO  | Task 
eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:24.794945 | orchestrator | 2026-03-17 01:10:24 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:24.797015 | orchestrator | 2026-03-17 01:10:24 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:24.800254 | orchestrator | 2026-03-17 01:10:24 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:10:24.800309 | orchestrator | 2026-03-17 01:10:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:42.979387 | orchestrator | 2026-03-17 01:10:42 | INFO  | Task 
eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:42.979476 | orchestrator | 2026-03-17 01:10:42 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:42.979482 | orchestrator | 2026-03-17 01:10:42 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:42.979487 | orchestrator | 2026-03-17 01:10:42 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:10:42.979492 | orchestrator | 2026-03-17 01:10:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:45.998135 | orchestrator | 2026-03-17 01:10:45 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:45.998650 | orchestrator | 2026-03-17 01:10:45 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:45.999253 | orchestrator | 2026-03-17 01:10:45 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state STARTED 2026-03-17 01:10:45.999893 | orchestrator | 2026-03-17 01:10:45 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:10:45.999923 | orchestrator | 2026-03-17 01:10:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:49.030264 | orchestrator | 2026-03-17 01:10:49 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:49.030575 | orchestrator | 2026-03-17 01:10:49 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:49.031550 | orchestrator | 2026-03-17 01:10:49 | INFO  | Task a91dc43d-f4d6-44ec-84da-6c9046b1dacf is in state SUCCESS 2026-03-17 01:10:49.033397 | orchestrator | 2026-03-17 01:10:49.033434 | orchestrator | 2026-03-17 01:10:49.033440 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:10:49.033455 | orchestrator | 2026-03-17 01:10:49.033458 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-03-17 01:10:49.033463 | orchestrator | Tuesday 17 March 2026 01:09:28 +0000 (0:00:00.270) 0:00:00.270 ********* 2026-03-17 01:10:49.033478 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:10:49.033493 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:10:49.033496 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:10:49.033500 | orchestrator | 2026-03-17 01:10:49.033503 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:10:49.033506 | orchestrator | Tuesday 17 March 2026 01:09:28 +0000 (0:00:00.262) 0:00:00.532 ********* 2026-03-17 01:10:49.033509 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-17 01:10:49.033513 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-17 01:10:49.033516 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-17 01:10:49.033519 | orchestrator | 2026-03-17 01:10:49.033522 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-17 01:10:49.033527 | orchestrator | 2026-03-17 01:10:49.033532 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-17 01:10:49.033537 | orchestrator | Tuesday 17 March 2026 01:09:28 +0000 (0:00:00.255) 0:00:00.787 ********* 2026-03-17 01:10:49.033543 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:10:49.033551 | orchestrator | 2026-03-17 01:10:49.033555 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] *************** 2026-03-17 01:10:49.033560 | orchestrator | Tuesday 17 March 2026 01:09:29 +0000 (0:00:00.559) 0:00:01.347 ********* 2026-03-17 01:10:49.033565 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-17 01:10:49.033570 | orchestrator | 2026-03-17 01:10:49.033575 | orchestrator | TASK 
[service-ks-register : glance | Creating/deleting endpoints] ************** 2026-03-17 01:10:49.033579 | orchestrator | Tuesday 17 March 2026 01:09:33 +0000 (0:00:04.409) 0:00:05.757 ********* 2026-03-17 01:10:49.033583 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-17 01:10:49.033588 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-17 01:10:49.033593 | orchestrator | 2026-03-17 01:10:49.033598 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-17 01:10:49.033603 | orchestrator | Tuesday 17 March 2026 01:09:39 +0000 (0:00:06.080) 0:00:11.837 ********* 2026-03-17 01:10:49.033609 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-17 01:10:49.033612 | orchestrator | 2026-03-17 01:10:49.033615 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-17 01:10:49.033619 | orchestrator | Tuesday 17 March 2026 01:09:43 +0000 (0:00:03.096) 0:00:14.933 ********* 2026-03-17 01:10:49.033622 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-17 01:10:49.033625 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:10:49.033628 | orchestrator | 2026-03-17 01:10:49.033632 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-17 01:10:49.033635 | orchestrator | Tuesday 17 March 2026 01:09:46 +0000 (0:00:03.471) 0:00:18.405 ********* 2026-03-17 01:10:49.033638 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:10:49.033641 | orchestrator | 2026-03-17 01:10:49.033644 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] ************* 2026-03-17 01:10:49.033647 | orchestrator | Tuesday 17 March 2026 01:09:49 +0000 (0:00:02.608) 0:00:21.014 ********* 2026-03-17 
01:10:49.033651 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-17 01:10:49.033654 | orchestrator | 2026-03-17 01:10:49.033657 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-17 01:10:49.033660 | orchestrator | Tuesday 17 March 2026 01:09:52 +0000 (0:00:03.662) 0:00:24.676 ********* 2026-03-17 01:10:49.033675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:10:49.033684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:10:49.033688 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:10:49.033694 | orchestrator | 2026-03-17 01:10:49.033697 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-17 01:10:49.033700 | orchestrator | Tuesday 17 March 2026 01:09:57 +0000 (0:00:04.510) 0:00:29.187 ********* 2026-03-17 
01:10:49.033706 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:10:49.033717 | orchestrator | 2026-03-17 01:10:49.033724 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-17 01:10:49.033727 | orchestrator | Tuesday 17 March 2026 01:09:57 +0000 (0:00:00.562) 0:00:29.749 ********* 2026-03-17 01:10:49.033730 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:10:49.033734 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:10:49.033737 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:10:49.033740 | orchestrator | 2026-03-17 01:10:49.033743 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-17 01:10:49.033746 | orchestrator | Tuesday 17 March 2026 01:10:01 +0000 (0:00:03.200) 0:00:32.950 ********* 2026-03-17 01:10:49.033750 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-17 01:10:49.033754 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-17 01:10:49.033757 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-17 01:10:49.033761 | orchestrator | 2026-03-17 01:10:49.033764 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-17 01:10:49.033767 | orchestrator | Tuesday 17 March 2026 01:10:02 +0000 (0:00:01.470) 0:00:34.420 ********* 2026-03-17 01:10:49.033771 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-17 01:10:49.033774 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-17 01:10:49.033777 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-03-17 01:10:49.033780 | orchestrator | 2026-03-17 01:10:49.033783 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-17 01:10:49.033787 | orchestrator | Tuesday 17 March 2026 01:10:03 +0000 (0:00:01.269) 0:00:35.690 ********* 2026-03-17 01:10:49.033790 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:10:49.033793 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:10:49.033796 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:10:49.033799 | orchestrator | 2026-03-17 01:10:49.033803 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-17 01:10:49.033808 | orchestrator | Tuesday 17 March 2026 01:10:04 +0000 (0:00:00.699) 0:00:36.390 ********* 2026-03-17 01:10:49.033811 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.033814 | orchestrator | 2026-03-17 01:10:49.033818 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-17 01:10:49.033821 | orchestrator | Tuesday 17 March 2026 01:10:04 +0000 (0:00:00.131) 0:00:36.521 ********* 2026-03-17 01:10:49.033824 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.033827 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:49.033830 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:49.033833 | orchestrator | 2026-03-17 01:10:49.033836 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-17 01:10:49.033845 | orchestrator | Tuesday 17 March 2026 01:10:04 +0000 (0:00:00.268) 0:00:36.790 ********* 2026-03-17 01:10:49.033848 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:10:49.033855 | orchestrator | 2026-03-17 01:10:49.033858 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-17 01:10:49.033861 | orchestrator | Tuesday 17 March 2026 01:10:05 +0000 (0:00:00.664) 0:00:37.454 ********* 2026-03-17 01:10:49.033867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:10:49.033872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:10:49.033879 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:10:49.033884 | orchestrator | 2026-03-17 01:10:49.033892 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-17 01:10:49.033897 | orchestrator | Tuesday 17 March 2026 01:10:09 +0000 (0:00:04.022) 0:00:41.477 ********* 2026-03-17 
01:10:49.033905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:10:49.033920 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.033926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:10:49.033931 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:49.033940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:10:49.033946 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:49.033954 | orchestrator | 2026-03-17 01:10:49.033959 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-17 01:10:49.033963 | orchestrator | Tuesday 17 March 2026 01:10:12 +0000 (0:00:03.312) 0:00:44.789 ********* 2026-03-17 01:10:49.033969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:10:49.033974 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.033982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:10:49.033987 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:49.033992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-17 01:10:49.034000 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:49.034005 | orchestrator | 2026-03-17 01:10:49.034009 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-17 01:10:49.034041 | orchestrator | Tuesday 17 March 2026 01:10:15 +0000 (0:00:03.037) 0:00:47.826 ********* 2026-03-17 01:10:49.034045 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.034049 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:49.034052 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:49.034056 | orchestrator | 2026-03-17 01:10:49.034060 | orchestrator | TASK [glance : Copying over config.json files 
for services] ******************** 2026-03-17 01:10:49.034066 | orchestrator | Tuesday 17 March 2026 01:10:18 +0000 (0:00:02.770) 0:00:50.597 ********* 2026-03-17 01:10:49.034078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:10:49.034098 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:10:49.034105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-17 01:10:49.034122 | orchestrator | 2026-03-17 01:10:49.034127 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-17 01:10:49.034135 | orchestrator | Tuesday 17 March 2026 01:10:25 +0000 (0:00:06.556) 0:00:57.154 ********* 2026-03-17 01:10:49.034141 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:10:49.034146 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:10:49.034151 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:10:49.034156 | 
orchestrator | 2026-03-17 01:10:49.034162 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-17 01:10:49.034171 | orchestrator | Tuesday 17 March 2026 01:10:30 +0000 (0:00:04.764) 0:01:01.919 ********* 2026-03-17 01:10:49.034177 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:49.034182 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.034188 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:49.034193 | orchestrator | 2026-03-17 01:10:49.034196 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-17 01:10:49.034200 | orchestrator | Tuesday 17 March 2026 01:10:32 +0000 (0:00:02.918) 0:01:04.837 ********* 2026-03-17 01:10:49.034203 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:49.034207 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:49.034211 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.034214 | orchestrator | 2026-03-17 01:10:49.034218 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-17 01:10:49.034221 | orchestrator | Tuesday 17 March 2026 01:10:37 +0000 (0:00:04.410) 0:01:09.248 ********* 2026-03-17 01:10:49.034225 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.034231 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:49.034236 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:49.034241 | orchestrator | 2026-03-17 01:10:49.034246 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-17 01:10:49.034252 | orchestrator | Tuesday 17 March 2026 01:10:40 +0000 (0:00:03.316) 0:01:12.564 ********* 2026-03-17 01:10:49.034257 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.034263 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:49.034268 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:49.034274 | 
orchestrator | 2026-03-17 01:10:49.034279 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-17 01:10:49.034285 | orchestrator | Tuesday 17 March 2026 01:10:40 +0000 (0:00:00.228) 0:01:12.793 ********* 2026-03-17 01:10:49.034294 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-17 01:10:49.034297 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:10:49.034301 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-17 01:10:49.034304 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:10:49.034307 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-17 01:10:49.034310 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:10:49.034313 | orchestrator | 2026-03-17 01:10:49.034316 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-17 01:10:49.034320 | orchestrator | Tuesday 17 March 2026 01:10:44 +0000 (0:00:03.860) 0:01:16.654 ********* 2026-03-17 01:10:49.034326 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"msg": "The conditional check 'glance_backend_nvme | default(false) | bool)' failed. The error was: template error while templating string: unexpected ')'. String: {% if glance_backend_nvme | default(false) | bool) %} True {% else %} False {% endif %}. unexpected ')'\n\nThe error appears to be in '/ansible/roles/glance/tasks/config.yml': line 140, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generating 'hostnqn' file for glance_api\n ^ here\n"} 2026-03-17 01:10:49.034333 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"msg": "The conditional check 'glance_backend_nvme | default(false) | bool)' failed. 
The error was: template error while templating string: unexpected ')'. String: {% if glance_backend_nvme | default(false) | bool) %} True {% else %} False {% endif %}. unexpected ')'\n\nThe error appears to be in '/ansible/roles/glance/tasks/config.yml': line 140, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generating 'hostnqn' file for glance_api\n ^ here\n"} 2026-03-17 01:10:49.034338 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"msg": "The conditional check 'glance_backend_nvme | default(false) | bool)' failed. The error was: template error while templating string: unexpected ')'. String: {% if glance_backend_nvme | default(false) | bool) %} True {% else %} False {% endif %}. unexpected ')'\n\nThe error appears to be in '/ansible/roles/glance/tasks/config.yml': line 140, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Generating 'hostnqn' file for glance_api\n ^ here\n"} 2026-03-17 01:10:49.034348 | orchestrator | 2026-03-17 01:10:49.034354 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:10:49.034360 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=1  skipped=10  rescued=0 ignored=0 2026-03-17 01:10:49.034370 | orchestrator | testbed-node-1 : ok=13  changed=7  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0 2026-03-17 01:10:49.034376 | orchestrator | testbed-node-2 : ok=13  changed=7  unreachable=0 failed=1  skipped=9  rescued=0 ignored=0 2026-03-17 01:10:49.034381 | orchestrator | 2026-03-17 01:10:49.034387 | orchestrator | 2026-03-17 01:10:49.034392 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:10:49.034397 | orchestrator | Tuesday 17 March 2026 01:10:47 +0000 (0:00:03.061) 0:01:19.715 ********* 2026-03-17 01:10:49.034400 
| orchestrator | =============================================================================== 2026-03-17 01:10:49.034404 | orchestrator | glance : Copying over config.json files for services -------------------- 6.56s 2026-03-17 01:10:49.034407 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 6.08s 2026-03-17 01:10:49.034411 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 4.76s 2026-03-17 01:10:49.034416 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.51s 2026-03-17 01:10:49.034421 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.41s 2026-03-17 01:10:49.034426 | orchestrator | service-ks-register : glance | Creating/deleting services --------------- 4.41s 2026-03-17 01:10:49.034431 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.02s 2026-03-17 01:10:49.034437 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.86s 2026-03-17 01:10:49.034442 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 3.66s 2026-03-17 01:10:49.034448 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.47s 2026-03-17 01:10:49.034453 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.32s 2026-03-17 01:10:49.034458 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.31s 2026-03-17 01:10:49.034463 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.20s 2026-03-17 01:10:49.034468 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.10s 2026-03-17 01:10:49.034474 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.06s 2026-03-17 01:10:49.034479 | 
orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.04s 2026-03-17 01:10:49.034484 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 2.92s 2026-03-17 01:10:49.034489 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 2.77s 2026-03-17 01:10:49.034494 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 2.61s 2026-03-17 01:10:49.034499 | orchestrator | glance : Copy over multiple ceph configs for Glance --------------------- 1.47s 2026-03-17 01:10:49.034505 | orchestrator | 2026-03-17 01:10:49 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:10:49.034510 | orchestrator | 2026-03-17 01:10:49 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:10:49.034520 | orchestrator | 2026-03-17 01:10:49 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:52.089001 | orchestrator | 2026-03-17 01:10:52 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:52.091708 | orchestrator | 2026-03-17 01:10:52 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:52.092433 | orchestrator | 2026-03-17 01:10:52 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:10:52.093463 | orchestrator | 2026-03-17 01:10:52 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:10:52.093502 | orchestrator | 2026-03-17 01:10:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:55.139537 | orchestrator | 2026-03-17 01:10:55 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:55.140509 | orchestrator | 2026-03-17 01:10:55 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:55.141753 | orchestrator | 2026-03-17 01:10:55 | INFO  | Task 
92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:10:55.143071 | orchestrator | 2026-03-17 01:10:55 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:10:55.143132 | orchestrator | 2026-03-17 01:10:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:10:58.186589 | orchestrator | 2026-03-17 01:10:58 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:10:58.187824 | orchestrator | 2026-03-17 01:10:58 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:10:58.188938 | orchestrator | 2026-03-17 01:10:58 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:10:58.191933 | orchestrator | 2026-03-17 01:10:58 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:10:58.191980 | orchestrator | 2026-03-17 01:10:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:01.222005 | orchestrator | 2026-03-17 01:11:01 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:01.224183 | orchestrator | 2026-03-17 01:11:01 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:01.225617 | orchestrator | 2026-03-17 01:11:01 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:01.227432 | orchestrator | 2026-03-17 01:11:01 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:01.227508 | orchestrator | 2026-03-17 01:11:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:04.281946 | orchestrator | 2026-03-17 01:11:04 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:04.285173 | orchestrator | 2026-03-17 01:11:04 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:04.287619 | orchestrator | 2026-03-17 01:11:04 | INFO  | Task 
92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:04.288345 | orchestrator | 2026-03-17 01:11:04 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:04.288397 | orchestrator | 2026-03-17 01:11:04 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:07.335544 | orchestrator | 2026-03-17 01:11:07 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:07.335601 | orchestrator | 2026-03-17 01:11:07 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:07.338731 | orchestrator | 2026-03-17 01:11:07 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:07.340465 | orchestrator | 2026-03-17 01:11:07 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:07.340919 | orchestrator | 2026-03-17 01:11:07 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:10.378168 | orchestrator | 2026-03-17 01:11:10 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:10.378251 | orchestrator | 2026-03-17 01:11:10 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:10.379231 | orchestrator | 2026-03-17 01:11:10 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:10.379797 | orchestrator | 2026-03-17 01:11:10 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:10.379824 | orchestrator | 2026-03-17 01:11:10 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:13.430819 | orchestrator | 2026-03-17 01:11:13 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:13.432743 | orchestrator | 2026-03-17 01:11:13 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:13.435540 | orchestrator | 2026-03-17 01:11:13 | INFO  | Task 
92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:13.437601 | orchestrator | 2026-03-17 01:11:13 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:13.437706 | orchestrator | 2026-03-17 01:11:13 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:16.468960 | orchestrator | 2026-03-17 01:11:16 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:16.469526 | orchestrator | 2026-03-17 01:11:16 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:16.471051 | orchestrator | 2026-03-17 01:11:16 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:16.472301 | orchestrator | 2026-03-17 01:11:16 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:16.472341 | orchestrator | 2026-03-17 01:11:16 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:19.517011 | orchestrator | 2026-03-17 01:11:19 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:19.519368 | orchestrator | 2026-03-17 01:11:19 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:19.520812 | orchestrator | 2026-03-17 01:11:19 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:19.522990 | orchestrator | 2026-03-17 01:11:19 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:19.523085 | orchestrator | 2026-03-17 01:11:19 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:22.556109 | orchestrator | 2026-03-17 01:11:22 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:22.558791 | orchestrator | 2026-03-17 01:11:22 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:22.561803 | orchestrator | 2026-03-17 01:11:22 | INFO  | Task 
92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:22.565443 | orchestrator | 2026-03-17 01:11:22 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:22.565491 | orchestrator | 2026-03-17 01:11:22 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:25.609143 | orchestrator | 2026-03-17 01:11:25 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:25.609411 | orchestrator | 2026-03-17 01:11:25 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:25.612434 | orchestrator | 2026-03-17 01:11:25 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:25.614160 | orchestrator | 2026-03-17 01:11:25 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:25.614840 | orchestrator | 2026-03-17 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:28.653322 | orchestrator | 2026-03-17 01:11:28 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:28.653394 | orchestrator | 2026-03-17 01:11:28 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:28.655358 | orchestrator | 2026-03-17 01:11:28 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:28.657255 | orchestrator | 2026-03-17 01:11:28 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:28.657421 | orchestrator | 2026-03-17 01:11:28 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:31.721227 | orchestrator | 2026-03-17 01:11:31 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:31.723679 | orchestrator | 2026-03-17 01:11:31 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:31.725665 | orchestrator | 2026-03-17 01:11:31 | INFO  | Task 
92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:31.727267 | orchestrator | 2026-03-17 01:11:31 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:31.727512 | orchestrator | 2026-03-17 01:11:31 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:34.774929 | orchestrator | 2026-03-17 01:11:34 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:34.777113 | orchestrator | 2026-03-17 01:11:34 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:34.779039 | orchestrator | 2026-03-17 01:11:34 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:34.780616 | orchestrator | 2026-03-17 01:11:34 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:34.780655 | orchestrator | 2026-03-17 01:11:34 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:37.826427 | orchestrator | 2026-03-17 01:11:37 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:37.828215 | orchestrator | 2026-03-17 01:11:37 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state STARTED 2026-03-17 01:11:37.831525 | orchestrator | 2026-03-17 01:11:37 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:11:37.834762 | orchestrator | 2026-03-17 01:11:37 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:11:37.834833 | orchestrator | 2026-03-17 01:11:37 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:11:40.880802 | orchestrator | 2026-03-17 01:11:40 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED 2026-03-17 01:11:40.887582 | orchestrator | 2026-03-17 01:11:40 | INFO  | Task d823575e-5629-46af-be66-6a2c9ee27392 is in state SUCCESS 2026-03-17 01:11:40.889476 | orchestrator | 2026-03-17 01:11:40.889557 | orchestrator | 2026-03-17 
01:11:40.889567 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:11:40.889575 | orchestrator | 2026-03-17 01:11:40.889582 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:11:40.889613 | orchestrator | Tuesday 17 March 2026 01:08:49 +0000 (0:00:00.359) 0:00:00.359 ********* 2026-03-17 01:11:40.889621 | orchestrator | ok: [testbed-manager] 2026-03-17 01:11:40.889629 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:11:40.889636 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:11:40.889641 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:11:40.889647 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:11:40.889653 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:11:40.889660 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:11:40.889666 | orchestrator | 2026-03-17 01:11:40.889671 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:11:40.889678 | orchestrator | Tuesday 17 March 2026 01:08:50 +0000 (0:00:00.724) 0:00:01.084 ********* 2026-03-17 01:11:40.889685 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-17 01:11:40.889691 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-17 01:11:40.889696 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-17 01:11:40.889700 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-17 01:11:40.889703 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-17 01:11:40.889707 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-17 01:11:40.889711 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-17 01:11:40.889715 | orchestrator | 2026-03-17 01:11:40.889719 | orchestrator | PLAY [Apply role prometheus] *************************************************** 
2026-03-17 01:11:40.889723 | orchestrator | 2026-03-17 01:11:40.889727 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-17 01:11:40.889731 | orchestrator | Tuesday 17 March 2026 01:08:51 +0000 (0:00:00.980) 0:00:02.065 ********* 2026-03-17 01:11:40.889736 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:11:40.889741 | orchestrator | 2026-03-17 01:11:40.889745 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-17 01:11:40.889749 | orchestrator | Tuesday 17 March 2026 01:08:52 +0000 (0:00:01.497) 0:00:03.563 ********* 2026-03-17 01:11:40.889755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.889764 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET 
/-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-17 01:11:40.889770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.889797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.889808 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.889817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.889823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.889830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.889897 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.889906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890184 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:11:40.890190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890195 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890222 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890236 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890316 | orchestrator 
| 2026-03-17 01:11:40.890322 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-17 01:11:40.890329 | orchestrator | Tuesday 17 March 2026 01:08:56 +0000 (0:00:03.787) 0:00:07.350 ********* 2026-03-17 01:11:40.890336 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:11:40.890342 | orchestrator | 2026-03-17 01:11:40.890349 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-17 01:11:40.890355 | orchestrator | Tuesday 17 March 2026 01:08:57 +0000 (0:00:01.378) 0:00:08.729 ********* 2026-03-17 01:11:40.890363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890380 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-17 01:11:40.890792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890849 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890906 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.890912 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890930 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.890949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890975 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.890982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-03-17 01:11:40.890994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.891002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.891021 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:11:40.891026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.891030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.891034 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.891091 | orchestrator | 2026-03-17 01:11:40.891098 | orchestrator | TASK [service-cert-copy 
: prometheus | Copying over backend internal TLS certificate] *** 2026-03-17 01:11:40.891105 | orchestrator | Tuesday 17 March 2026 01:09:04 +0000 (0:00:06.652) 0:00:15.382 ********* 2026-03-17 01:11:40.891112 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-17 01:11:40.891119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.891482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.891509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.891516 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.891523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-17 01:11:40.891538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.891544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.891550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.891556 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.891593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.891599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.891617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.891630 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.891637 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:11:40.891649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891675 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.891687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.891692 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.891698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.891705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.891711 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.891717 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.891825 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:11:40.891840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891959 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.891965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891970 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.891976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.891981 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.891998 | orchestrator | 2026-03-17 01:11:40.892005 | orchestrator | TASK [service-cert-copy : prometheus 
| Copying over backend internal TLS key] *** 2026-03-17 01:11:40.892012 | orchestrator | Tuesday 17 March 2026 01:09:07 +0000 (0:00:02.803) 0:00:18.185 ********* 2026-03-17 01:11:40.892018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.892025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.892033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.892060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.892089 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-17 01:11:40.892103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 
01:11:40.892109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.892115 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.892121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.892126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.892135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.892158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.892169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892186 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.892193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.892209 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.892237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.892250 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.892259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-17 01:11:40.892267 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.892273 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892279 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:11:40.892285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.892290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892300 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.892312 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:11:40.892339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.892346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892353 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.892359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.892371 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.892377 | orchestrator | 2026-03-17 01:11:40.892384 | orchestrator | TASK [prometheus : 
Copying over config.json files] ***************************** 2026-03-17 01:11:40.892390 | orchestrator | Tuesday 17 March 2026 01:09:10 +0000 (0:00:03.115) 0:00:21.300 ********* 2026-03-17 01:11:40.892396 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-17 01:11:40.892414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.892440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.892447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.892453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.892460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.892466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.892472 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.892479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892495 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892516 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892520 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 
'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892562 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:11:40.892566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892575 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.892591 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.892609 | orchestrator | 2026-03-17 01:11:40.892614 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-17 01:11:40.892619 | orchestrator | Tuesday 17 March 2026 01:09:16 +0000 (0:00:05.811) 0:00:27.112 ********* 2026-03-17 01:11:40.892623 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 01:11:40.892628 | orchestrator | 2026-03-17 01:11:40.892632 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-17 01:11:40.892637 | orchestrator | Tuesday 17 March 2026 01:09:17 +0000 (0:00:00.831) 0:00:27.943 ********* 2026-03-17 01:11:40.892645 | orchestrator | skipping: 
[testbed-manager] 2026-03-17 01:11:40.892649 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.892654 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.892658 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.892662 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.892667 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.892671 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.892675 | orchestrator | 2026-03-17 01:11:40.892680 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-17 01:11:40.892684 | orchestrator | Tuesday 17 March 2026 01:09:17 +0000 (0:00:00.816) 0:00:28.760 ********* 2026-03-17 01:11:40.892689 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 01:11:40.892693 | orchestrator | 2026-03-17 01:11:40.892698 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-17 01:11:40.892702 | orchestrator | Tuesday 17 March 2026 01:09:18 +0000 (0:00:00.815) 0:00:29.575 ********* 2026-03-17 01:11:40.892707 | orchestrator | [WARNING]: Skipped 2026-03-17 01:11:40.892713 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892717 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-17 01:11:40.892722 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892726 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-17 01:11:40.892731 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 01:11:40.892735 | orchestrator | [WARNING]: Skipped 2026-03-17 01:11:40.892739 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892744 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-17 01:11:40.892750 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892756 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-17 01:11:40.892762 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-17 01:11:40.892768 | orchestrator | [WARNING]: Skipped 2026-03-17 01:11:40.892779 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892790 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-17 01:11:40.892796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892801 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-17 01:11:40.892808 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:11:40.892814 | orchestrator | [WARNING]: Skipped 2026-03-17 01:11:40.892824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892830 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-17 01:11:40.892836 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892842 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-17 01:11:40.892848 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:11:40.892854 | orchestrator | [WARNING]: Skipped 2026-03-17 01:11:40.892860 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892866 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-17 01:11:40.892875 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892882 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-17 01:11:40.892888 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-17 01:11:40.892894 | orchestrator | [WARNING]: Skipped 
2026-03-17 01:11:40.892900 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892906 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-17 01:11:40.892918 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892923 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-17 01:11:40.892929 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:11:40.892935 | orchestrator | [WARNING]: Skipped 2026-03-17 01:11:40.892941 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892947 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-17 01:11:40.892953 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-17 01:11:40.892960 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-17 01:11:40.892964 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 01:11:40.892968 | orchestrator | 2026-03-17 01:11:40.892972 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-17 01:11:40.892976 | orchestrator | Tuesday 17 March 2026 01:09:21 +0000 (0:00:03.316) 0:00:32.891 ********* 2026-03-17 01:11:40.892980 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-17 01:11:40.892985 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.892990 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-17 01:11:40.892997 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.893003 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-17 01:11:40.893008 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-17 01:11:40.893015 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.893021 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.893026 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-17 01:11:40.893032 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.893053 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-17 01:11:40.893059 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.893065 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-17 01:11:40.893075 | orchestrator | 2026-03-17 01:11:40.893083 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-17 01:11:40.893088 | orchestrator | Tuesday 17 March 2026 01:09:35 +0000 (0:00:13.611) 0:00:46.503 ********* 2026-03-17 01:11:40.893095 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-17 01:11:40.893101 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.893107 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-17 01:11:40.893113 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.893118 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-17 01:11:40.893124 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.893130 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-17 01:11:40.893136 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.893141 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-17 01:11:40.893147 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.893152 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-17 01:11:40.893158 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.893165 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-17 01:11:40.893171 | orchestrator | 2026-03-17 01:11:40.893176 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-17 01:11:40.893193 | orchestrator | Tuesday 17 March 2026 01:09:39 +0000 (0:00:03.745) 0:00:50.248 ********* 2026-03-17 01:11:40.893200 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-17 01:11:40.893207 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.893219 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-17 01:11:40.893226 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-17 01:11:40.893233 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.893239 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.893245 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-17 01:11:40.893252 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-17 01:11:40.893258 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.893264 | orchestrator | skipping: [testbed-node-5] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-17 01:11:40.893270 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.893276 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-17 01:11:40.893282 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.893288 | orchestrator | 2026-03-17 01:11:40.893294 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-17 01:11:40.893300 | orchestrator | Tuesday 17 March 2026 01:09:40 +0000 (0:00:01.543) 0:00:51.792 ********* 2026-03-17 01:11:40.893307 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 01:11:40.893313 | orchestrator | 2026-03-17 01:11:40.893319 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-17 01:11:40.893326 | orchestrator | Tuesday 17 March 2026 01:09:41 +0000 (0:00:00.791) 0:00:52.583 ********* 2026-03-17 01:11:40.893331 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:11:40.893337 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.893343 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.893349 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.893356 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.893362 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.893368 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.893374 | orchestrator | 2026-03-17 01:11:40.893380 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-17 01:11:40.893386 | orchestrator | Tuesday 17 March 2026 01:09:42 +0000 (0:00:00.849) 0:00:53.433 ********* 2026-03-17 01:11:40.893391 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:11:40.893515 | orchestrator | skipping: [testbed-node-4] 
2026-03-17 01:11:40.893524 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.893530 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.893536 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:11:40.893555 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:11:40.893559 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:11:40.893563 | orchestrator | 2026-03-17 01:11:40.893568 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-17 01:11:40.893572 | orchestrator | Tuesday 17 March 2026 01:09:44 +0000 (0:00:01.928) 0:00:55.361 ********* 2026-03-17 01:11:40.893606 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-17 01:11:40.893612 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:11:40.893616 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-17 01:11:40.893620 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.893632 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-17 01:11:40.893636 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.893640 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-17 01:11:40.893644 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.893648 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-17 01:11:40.893652 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.893656 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-17 01:11:40.893660 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.893664 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  
2026-03-17 01:11:40.893668 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.893672 | orchestrator | 2026-03-17 01:11:40.893675 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-17 01:11:40.893679 | orchestrator | Tuesday 17 March 2026 01:09:45 +0000 (0:00:01.104) 0:00:56.465 ********* 2026-03-17 01:11:40.893683 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-17 01:11:40.893687 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.893691 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-17 01:11:40.893695 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.893699 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-17 01:11:40.893708 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.893712 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-17 01:11:40.893716 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.893720 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-17 01:11:40.893728 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-17 01:11:40.893732 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-17 01:11:40.893736 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.893740 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.893744 | orchestrator | 2026-03-17 01:11:40.893748 | orchestrator | TASK [prometheus : Find extra prometheus server 
config files] ****************** 2026-03-17 01:11:40.893752 | orchestrator | Tuesday 17 March 2026 01:09:46 +0000 (0:00:01.286) 0:00:57.752 ********* 2026-03-17 01:11:40.893756 | orchestrator | [WARNING]: Skipped 2026-03-17 01:11:40.893761 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-17 01:11:40.893765 | orchestrator | due to this access issue: 2026-03-17 01:11:40.893769 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-17 01:11:40.893773 | orchestrator | not a directory 2026-03-17 01:11:40.893776 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-17 01:11:40.893780 | orchestrator | 2026-03-17 01:11:40.893784 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-17 01:11:40.893788 | orchestrator | Tuesday 17 March 2026 01:09:47 +0000 (0:00:01.015) 0:00:58.767 ********* 2026-03-17 01:11:40.893792 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:11:40.893796 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.893799 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.893803 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.893807 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.893811 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.893819 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.893823 | orchestrator | 2026-03-17 01:11:40.893827 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-17 01:11:40.893831 | orchestrator | Tuesday 17 March 2026 01:09:48 +0000 (0:00:00.604) 0:00:59.372 ********* 2026-03-17 01:11:40.893835 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:11:40.893839 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.893842 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.893846 | orchestrator | 
skipping: [testbed-node-2] 2026-03-17 01:11:40.893850 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.893853 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.893857 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.893861 | orchestrator | 2026-03-17 01:11:40.893865 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-03-17 01:11:40.893869 | orchestrator | Tuesday 17 March 2026 01:09:49 +0000 (0:00:00.852) 0:01:00.225 ********* 2026-03-17 01:11:40.893874 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-17 01:11:40.893880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.893891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.893899 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.893904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.893912 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.893916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.893920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-17 01:11:40.893924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
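The items looped over by the `service-check-containers` task above all share the same kolla-ansible service-definition shape: a dict keyed by service name, whose values carry `container_name`, `group`, `enabled`, `image`, `volumes`, and `dimensions`. A minimal sketch of filtering such a dict the way the task iterates it (the helper name `enabled_containers` is hypothetical, not part of kolla-ansible; the second entry is abridged and marked disabled purely to show the filtering):

```python
# Illustrative sketch only: mimics the shape of the kolla-ansible service
# definitions iterated by the "Check containers" task in the log above.
# The helper name `enabled_containers` is hypothetical.

def enabled_containers(services: dict) -> dict:
    """Map container_name -> image for every enabled service definition."""
    return {
        svc["container_name"]: svc["image"]
        for svc in services.values()
        if svc.get("enabled")
    }

# Two entries copied (abridged) from the loop items in the log output.
services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-node-exporter:2025.1",
        "pid_mode": "host",
        "volumes": [
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "/:/host:ro,rslave",
        ],
        "dimensions": {},
    },
    "prometheus-libvirt-exporter": {
        "container_name": "prometheus_libvirt_exporter",
        "group": "prometheus-libvirt-exporter",
        "enabled": False,  # hypothetical: disabled here only to demonstrate filtering
        "image": "registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1",
        "volumes": [],
        "dimensions": {},
    },
}

print(enabled_containers(services))
```

In the actual run every service shown is `enabled: True`, which is why each host reports `changed` for every item in its group.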
2026-03-17 01:11:40.893929 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.893936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.893944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.893949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.893956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.893961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.893965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 
01:11:40.893969 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:11:40.893975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.893983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.893990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.893995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.893999 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.894003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.894007 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.894033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.894110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-17 01:11:40.894124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.894128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.894132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-17 01:11:40.894136 | orchestrator | 2026-03-17 01:11:40.894140 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-03-17 01:11:40.894143 | orchestrator | Tuesday 17 March 
2026 01:09:54 +0000 (0:00:04.978) 0:01:05.204 *********
2026-03-17 01:11:40.894147 | orchestrator | changed: [testbed-manager] => {
2026-03-17 01:11:40.894151 | orchestrator |     "msg": "Notifying handlers"
2026-03-17 01:11:40.894155 | orchestrator | }
2026-03-17 01:11:40.894159 | orchestrator | changed: [testbed-node-0] => {
2026-03-17 01:11:40.894163 | orchestrator |     "msg": "Notifying handlers"
2026-03-17 01:11:40.894167 | orchestrator | }
2026-03-17 01:11:40.894171 | orchestrator | changed: [testbed-node-1] => {
2026-03-17 01:11:40.894175 | orchestrator |     "msg": "Notifying handlers"
2026-03-17 01:11:40.894179 | orchestrator | }
2026-03-17 01:11:40.894182 | orchestrator | changed: [testbed-node-2] => {
2026-03-17 01:11:40.894186 | orchestrator |     "msg": "Notifying handlers"
2026-03-17 01:11:40.894190 | orchestrator | }
2026-03-17 01:11:40.894194 | orchestrator | changed: [testbed-node-3] => {
2026-03-17 01:11:40.894198 | orchestrator |     "msg": "Notifying handlers"
2026-03-17 01:11:40.894201 | orchestrator | }
2026-03-17 01:11:40.894205 | orchestrator | changed: [testbed-node-4] => {
2026-03-17 01:11:40.894209 | orchestrator |     "msg": "Notifying handlers"
2026-03-17 01:11:40.894213 | orchestrator | }
2026-03-17 01:11:40.894217 | orchestrator | changed: [testbed-node-5] => {
2026-03-17 01:11:40.894220 | orchestrator |     "msg": "Notifying handlers"
2026-03-17 01:11:40.894224 | orchestrator | }
2026-03-17 01:11:40.894228 | orchestrator |
2026-03-17 01:11:40.894232 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-17 01:11:40.894236 | orchestrator | Tuesday 17 March 2026 01:09:55 +0000 (0:00:01.054) 0:01:06.258 *********
2026-03-17 01:11:40.894240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1',
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.894244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.894255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.894262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.894270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.894274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.894278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.894282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.894303 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-17 01:11:40.894310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.894317 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.894327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.894334 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.894340 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-17 01:11:40.894374 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:11:40.894385 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:11:40.894390 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-17 01:11:40.894396 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:11:40.894402 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:11:40.894408 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:11:40.894415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.894421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894438 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:11:40.894450 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.894462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894474 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:11:40.894479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-17 01:11:40.894486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-17 01:11:40.894502 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:11:40.894508 | orchestrator | 2026-03-17 01:11:40.894516 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-17 01:11:40.894520 | orchestrator | Tuesday 17 March 2026 01:09:57 +0000 (0:00:01.861) 0:01:08.120 ********* 2026-03-17 01:11:40.894524 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-17 01:11:40.894528 | orchestrator | skipping: [testbed-manager] 2026-03-17 01:11:40.894532 | orchestrator | 2026-03-17 01:11:40.894535 | orchestrator | TASK [prometheus : 
Flush handlers] ********************************************* 2026-03-17 01:11:40.894539 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:01.026) 0:01:09.147 ********* 2026-03-17 01:11:40.894543 | orchestrator | 2026-03-17 01:11:40.894547 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:11:40.894551 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:00.197) 0:01:09.344 ********* 2026-03-17 01:11:40.894555 | orchestrator | 2026-03-17 01:11:40.894558 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:11:40.894562 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:00.085) 0:01:09.430 ********* 2026-03-17 01:11:40.894566 | orchestrator | 2026-03-17 01:11:40.894570 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:11:40.894574 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:00.059) 0:01:09.489 ********* 2026-03-17 01:11:40.894578 | orchestrator | 2026-03-17 01:11:40.894581 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:11:40.894585 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:00.057) 0:01:09.547 ********* 2026-03-17 01:11:40.894589 | orchestrator | 2026-03-17 01:11:40.894593 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:11:40.894599 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:00.058) 0:01:09.605 ********* 2026-03-17 01:11:40.894603 | orchestrator | 2026-03-17 01:11:40.894607 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-17 01:11:40.894611 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:00.058) 0:01:09.663 ********* 2026-03-17 01:11:40.894615 | orchestrator | 2026-03-17 01:11:40.894619 | orchestrator | RUNNING HANDLER 
[prometheus : Restart prometheus-server container] ************* 2026-03-17 01:11:40.894625 | orchestrator | Tuesday 17 March 2026 01:09:58 +0000 (0:00:00.078) 0:01:09.742 ********* 2026-03-17 01:11:40.894629 | orchestrator | changed: [testbed-manager] 2026-03-17 01:11:40.894633 | orchestrator | 2026-03-17 01:11:40.894637 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-17 01:11:40.894641 | orchestrator | Tuesday 17 March 2026 01:10:19 +0000 (0:00:20.261) 0:01:30.004 ********* 2026-03-17 01:11:40.894645 | orchestrator | changed: [testbed-manager] 2026-03-17 01:11:40.894648 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:11:40.894652 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:11:40.894656 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:11:40.894660 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:11:40.894664 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:11:40.894667 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:11:40.894671 | orchestrator | 2026-03-17 01:11:40.894675 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-17 01:11:40.894679 | orchestrator | Tuesday 17 March 2026 01:10:33 +0000 (0:00:13.930) 0:01:43.934 ********* 2026-03-17 01:11:40.894683 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:11:40.894686 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:11:40.894690 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:11:40.894694 | orchestrator | 2026-03-17 01:11:40.894698 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-17 01:11:40.894707 | orchestrator | Tuesday 17 March 2026 01:10:43 +0000 (0:00:10.402) 0:01:54.337 ********* 2026-03-17 01:11:40.894712 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:11:40.894718 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:11:40.894725 | orchestrator | 
changed: [testbed-node-2] 2026-03-17 01:11:40.894735 | orchestrator | 2026-03-17 01:11:40.894740 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-17 01:11:40.894746 | orchestrator | Tuesday 17 March 2026 01:10:48 +0000 (0:00:04.784) 0:01:59.121 ********* 2026-03-17 01:11:40.894751 | orchestrator | changed: [testbed-manager] 2026-03-17 01:11:40.894757 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:11:40.894764 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:11:40.894770 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:11:40.894775 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:11:40.894780 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:11:40.894786 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:11:40.894792 | orchestrator | 2026-03-17 01:11:40.894797 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-17 01:11:40.894803 | orchestrator | Tuesday 17 March 2026 01:11:01 +0000 (0:00:13.356) 0:02:12.478 ********* 2026-03-17 01:11:40.894809 | orchestrator | changed: [testbed-manager] 2026-03-17 01:11:40.894814 | orchestrator | 2026-03-17 01:11:40.894820 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-17 01:11:40.894825 | orchestrator | Tuesday 17 March 2026 01:11:13 +0000 (0:00:12.075) 0:02:24.553 ********* 2026-03-17 01:11:40.894830 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:11:40.894836 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:11:40.894841 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:11:40.894847 | orchestrator | 2026-03-17 01:11:40.894852 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-17 01:11:40.894858 | orchestrator | Tuesday 17 March 2026 01:11:23 +0000 (0:00:10.030) 0:02:34.584 ********* 2026-03-17 01:11:40.894864 | orchestrator | 
changed: [testbed-manager] 2026-03-17 01:11:40.894869 | orchestrator | 2026-03-17 01:11:40.894874 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-17 01:11:40.894880 | orchestrator | Tuesday 17 March 2026 01:11:29 +0000 (0:00:05.539) 0:02:40.123 ********* 2026-03-17 01:11:40.894885 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:11:40.894890 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:11:40.894896 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:11:40.894902 | orchestrator | 2026-03-17 01:11:40.894908 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:11:40.894914 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2026-03-17 01:11:40.894920 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-17 01:11:40.894926 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-17 01:11:40.894932 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-17 01:11:40.894937 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-17 01:11:40.894943 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-17 01:11:40.894949 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-17 01:11:40.894962 | orchestrator | 2026-03-17 01:11:40.894968 | orchestrator | 2026-03-17 01:11:40.894974 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:11:40.894984 | orchestrator | Tuesday 17 March 2026 01:11:39 +0000 (0:00:10.202) 0:02:50.325 ********* 2026-03-17 01:11:40.894990 | 
2026-03-17 01:11:40.894990 | orchestrator | ===============================================================================
2026-03-17 01:11:40.894996 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.26s
2026-03-17 01:11:40.895002 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.93s
2026-03-17 01:11:40.895014 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.61s
2026-03-17 01:11:40.895021 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.36s
2026-03-17 01:11:40.895027 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.08s
2026-03-17 01:11:40.895032 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.40s
2026-03-17 01:11:40.895077 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.20s
2026-03-17 01:11:40.895085 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.03s
2026-03-17 01:11:40.895089 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.65s
2026-03-17 01:11:40.895093 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.81s
2026-03-17 01:11:40.895097 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.54s
2026-03-17 01:11:40.895101 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 4.98s
2026-03-17 01:11:40.895105 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 4.78s
2026-03-17 01:11:40.895108 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.79s
2026-03-17 01:11:40.895112 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.75s
2026-03-17 01:11:40.895116 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.32s
2026-03-17 01:11:40.895120 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.12s
2026-03-17 01:11:40.895124 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.80s
2026-03-17 01:11:40.895128 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.93s
2026-03-17 01:11:40.895131 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.86s
2026-03-17 01:11:40.895135 | orchestrator | 2026-03-17 01:11:40 | INFO  | Task aebe2ce8-d238-4294-9e8b-6d98eddd8f98 is in state STARTED
2026-03-17 01:11:40.895140 | orchestrator | 2026-03-17 01:11:40 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:11:40.895144 | orchestrator | 2026-03-17 01:11:40 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED
2026-03-17 01:11:40.895151 | orchestrator | 2026-03-17 01:11:40 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:43.948431 | orchestrator | 2026-03-17 01:11:43 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED
2026-03-17 01:11:43.951189 | orchestrator | 2026-03-17 01:11:43 | INFO  | Task aebe2ce8-d238-4294-9e8b-6d98eddd8f98 is in state STARTED
2026-03-17 01:11:43.953096 | orchestrator | 2026-03-17 01:11:43 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:11:43.955177 | orchestrator | 2026-03-17 01:11:43 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED
2026-03-17 01:11:43.955218 | orchestrator | 2026-03-17 01:11:43 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:11:46.997395 | orchestrator | 2026-03-17 01:11:46 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED
2026-03-17 01:12:51.088030 | orchestrator | 2026-03-17 01:12:51 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:12:51.089897 | orchestrator | 2026-03-17 01:12:51 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED
2026-03-17 01:12:51.090006 | orchestrator | 2026-03-17 01:12:51 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:54.139573 | orchestrator | 2026-03-17 01:12:54 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state STARTED
2026-03-17 01:12:54.139628 | orchestrator | 2026-03-17 01:12:54 | INFO  | Task aebe2ce8-d238-4294-9e8b-6d98eddd8f98 is in state STARTED
2026-03-17 01:12:54.140499 | orchestrator | 2026-03-17 01:12:54 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:12:54.142429 | orchestrator | 2026-03-17 01:12:54 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED
2026-03-17 01:12:54.142473 | orchestrator | 2026-03-17 01:12:54 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:12:57.195454 | orchestrator | 2026-03-17 01:12:57 | INFO  | Task eeed048b-4855-49f4-956f-70071c09a839 is in state SUCCESS
2026-03-17 01:12:57.196662 | orchestrator |
2026-03-17 01:12:57.196705 | orchestrator |
2026-03-17 01:12:57.196711 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:12:57.196715 | orchestrator |
2026-03-17 01:12:57.196719 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:12:57.196731 | orchestrator | Tuesday 17 March 2026 01:09:53 +0000 (0:00:00.304) 0:00:00.304 *********
2026-03-17 01:12:57.196735 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:12:57.196740 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:12:57.196744 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:12:57.196747 | orchestrator |
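The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines are the output of a simple state-polling loop on the manager, which blocks until every submitted task leaves the STARTED state. A minimal sketch of that pattern (hypothetical `get_state` callable and task IDs; not the actual osism client code):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until no task is STARTED any more, or time out.

    get_state(task_id) -> str, e.g. "STARTED" or "SUCCESS".
    Returns {task_id: last_observed_state}.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Keep polling only the tasks that are still running.
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

In the log above the loop polls four task UUIDs every few seconds until the first one reports SUCCESS.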
2026-03-17 01:12:57.196751 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:12:57.196755 | orchestrator | Tuesday 17 March 2026 01:09:54 +0000 (0:00:00.443) 0:00:00.748 *********
2026-03-17 01:12:57.196759 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-03-17 01:12:57.196763 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-03-17 01:12:57.196767 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-03-17 01:12:57.196771 | orchestrator |
2026-03-17 01:12:57.196774 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-03-17 01:12:57.196778 | orchestrator |
2026-03-17 01:12:57.196782 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-17 01:12:57.196786 | orchestrator | Tuesday 17 March 2026 01:09:54 +0000 (0:00:00.554) 0:00:01.302 *********
2026-03-17 01:12:57.196789 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:12:57.196794 | orchestrator |
2026-03-17 01:12:57.196798 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] ***************
2026-03-17 01:12:57.196801 | orchestrator | Tuesday 17 March 2026 01:09:56 +0000 (0:00:01.353) 0:00:02.656 *********
2026-03-17 01:12:57.196805 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage))
2026-03-17 01:12:57.196809 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-03-17 01:12:57.196813 | orchestrator |
2026-03-17 01:12:57.196817 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] **************
2026-03-17 01:12:57.196821 | orchestrator | Tuesday 17 March 2026 01:10:02 +0000 (0:00:06.124) 0:00:08.781 *********
2026-03-17 01:12:57.196824 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal)
2026-03-17 01:12:57.196828 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public)
2026-03-17 01:12:57.196833 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-03-17 01:12:57.196837 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-03-17 01:12:57.196841 | orchestrator |
2026-03-17 01:12:57.196844 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-03-17 01:12:57.196848 | orchestrator | Tuesday 17 March 2026 01:10:13 +0000 (0:00:11.121) 0:00:19.903 *********
2026-03-17 01:12:57.196852 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:12:57.196856 | orchestrator |
2026-03-17 01:12:57.196859 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-03-17 01:12:57.196863 | orchestrator | Tuesday 17 March 2026 01:10:16 +0000 (0:00:02.715) 0:00:22.618 *********
2026-03-17 01:12:57.196867 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-03-17 01:12:57.196871 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:12:57.196875 | orchestrator |
2026-03-17 01:12:57.196879 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-03-17 01:12:57.196882 | orchestrator | Tuesday 17 March 2026 01:10:19 +0000 (0:00:03.139) 0:00:25.758 *********
2026-03-17 01:12:57.196896 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:12:57.196900 | orchestrator |
2026-03-17 01:12:57.196904 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] *************
2026-03-17 01:12:57.196908 | orchestrator | Tuesday 17 March 2026 01:10:22 +0000 (0:00:03.198) 0:00:28.956 *********
| orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-17 01:12:57.196915 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-17 01:12:57.196919 | orchestrator | 2026-03-17 01:12:57.196923 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-17 01:12:57.196926 | orchestrator | Tuesday 17 March 2026 01:10:30 +0000 (0:00:07.798) 0:00:36.755 ********* 2026-03-17 01:12:57.197012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.197022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.197027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.197032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.197040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.197044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.197054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.197059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.197063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.197068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.197098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.197104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}})
2026-03-17 01:12:57.197108 | orchestrator | 
2026-03-17 01:12:57.197114 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-17 01:12:57.197119 | orchestrator | Tuesday 17 March 2026 01:10:32 +0000 (0:00:02.591) 0:00:39.346 *********
2026-03-17 01:12:57.197123 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:12:57.197126 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:12:57.197132 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:12:57.197136 | orchestrator | 
2026-03-17 01:12:57.197140 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-17 01:12:57.197144 | orchestrator | Tuesday 17 March 2026 01:10:33 +0000 (0:00:00.517) 0:00:39.864 *********
2026-03-17 01:12:57.197148 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:12:57.197152 | orchestrator | 
2026-03-17 01:12:57.197156 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-03-17 01:12:57.197160 | orchestrator | Tuesday 17 March 2026 01:10:34 +0000 (0:00:01.308) 0:00:41.173 *********
2026-03-17 01:12:57.197164 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-03-17 01:12:57.197191 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-03-17 01:12:57.197195 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-03-17 01:12:57.197199 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-03-17 01:12:57.197203 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-03-17 01:12:57.197207 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-03-17 01:12:57.197408 | orchestrator | 
2026-03-17 01:12:57.197418 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-17
01:12:57.197423 | orchestrator | Tuesday 17 March 2026 01:10:37 +0000 (0:00:02.305) 0:00:43.478 ********* 2026-03-17 01:12:57.197429 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-17 01:12:57.197440 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-17 01:12:57.197450 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-17 01:12:57.197459 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  
2026-03-17 01:12:57.197464 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-17 01:12:57.197473 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-17 01:12:57.197478 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-17 01:12:57.197488 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-17 01:12:57.197493 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-17 01:12:57.197503 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-17 01:12:57.197510 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-03-17 01:12:57.197521 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-03-17 01:12:57.197528 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-17 01:12:57.197849 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-17 01:12:57.197865 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-17 01:12:57.197869 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-17 01:12:57.197889 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-17 01:12:57.197894 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-17 01:12:57.197903 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-17 01:12:57.197908 | orchestrator | changed: 
[testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-17 01:12:57.197912 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-03-17 01:12:57.197927 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-17 01:12:57.198007 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-03-17 01:12:57.198047 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-03-17 01:12:57.198052 | orchestrator | 
2026-03-17 01:12:57.198056 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-03-17 01:12:57.198060 | orchestrator | Tuesday 17 March 2026 01:10:44 +0000 (0:00:07.157) 0:00:50.635 *********
2026-03-17 01:12:57.198064 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-03-17 01:12:57.198069 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-03-17 01:12:57.198073 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-03-17 01:12:57.198077 | orchestrator | 
2026-03-17 01:12:57.198081 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-03-17 01:12:57.198084 | orchestrator | Tuesday 17 March 2026 01:10:46 +0000 (0:00:01.797) 0:00:52.433 *********
2026-03-17 01:12:57.198088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-03-17 01:12:57.198092 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-03-17 01:12:57.198096 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-03-17 01:12:57.198100 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-03-17 01:12:57.198104 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-03-17 01:12:57.198107 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-03-17 01:12:57.198111 | orchestrator | 
2026-03-17 01:12:57.198115 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-03-17 01:12:57.198119 | orchestrator | Tuesday 17 March 2026 01:10:48 +0000 (0:00:02.643) 0:00:55.076 *********
2026-03-17 01:12:57.198123 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-03-17 01:12:57.198127 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-03-17 01:12:57.198131 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-03-17 01:12:57.198148 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-03-17 01:12:57.198152 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-03-17 01:12:57.198159 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-03-17 01:12:57.198163 | orchestrator | 
2026-03-17 01:12:57.198167 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-03-17 01:12:57.198173 | orchestrator | Tuesday 17 March 2026 01:10:49 +0000 (0:00:01.112) 0:00:56.188 *********
2026-03-17 01:12:57.198177 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:12:57.198182 | orchestrator | 
2026-03-17 01:12:57.198185 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-03-17 01:12:57.198189 | orchestrator | Tuesday 17 March 2026 01:10:50 +0000 (0:00:00.314) 0:00:56.503 *********
2026-03-17 01:12:57.198193 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:12:57.198197 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:12:57.198200 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:12:57.198204 | orchestrator | 
2026-03-17 01:12:57.198208 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-03-17 01:12:57.198212 | orchestrator | Tuesday 17 March 2026 01:10:50 +0000 (0:00:00.736) 0:00:57.239 *********
2026-03-17 01:12:57.198216 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:12:57.198220 | orchestrator | 
2026-03-17 01:12:57.198224 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-03-17 01:12:57.198227 | orchestrator | Tuesday 17 March 2026 01:10:51 +0000 (0:00:00.881) 0:00:58.121 *********
2026-03-17 01:12:57.198232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17
01:12:57.198236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.198257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.198273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198358 | orchestrator | 2026-03-17 01:12:57.198364 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-17 01:12:57.198370 | orchestrator | Tuesday 17 March 2026 01:10:56 +0000 (0:00:04.381) 0:01:02.503 ********* 2026-03-17 01:12:57.198375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.198389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198436 | orchestrator | 
skipping: [testbed-node-0] 2026-03-17 01:12:57.198441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.198445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198474 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:12:57.198482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.198487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198502 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:12:57.198506 | orchestrator | 2026-03-17 01:12:57.198510 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-17 01:12:57.198514 | orchestrator | Tuesday 17 March 2026 01:10:56 +0000 (0:00:00.872) 0:01:03.375 ********* 2026-03-17 01:12:57.198523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.198527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 
 2026-03-17 01:12:57.198546 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:12:57.198552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.198563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198592 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:12:57.198599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.198610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.198634 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:12:57.198638 | orchestrator | 2026-03-17 01:12:57.198642 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-17 01:12:57.198646 | orchestrator | Tuesday 17 March 2026 01:10:57 +0000 (0:00:00.837) 0:01:04.213 ********* 2026-03-17 01:12:57.198650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.198654 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.198661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.198668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 
01:12:57.198682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198714 | orchestrator | 2026-03-17 01:12:57.198718 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-17 01:12:57.198722 | orchestrator | Tuesday 17 March 2026 01:11:02 +0000 (0:00:04.382) 0:01:08.596 ********* 2026-03-17 01:12:57.198726 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-03-17 01:12:57.198733 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:12:57.198736 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-03-17 01:12:57.198740 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:12:57.198744 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-03-17 01:12:57.198748 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:12:57.198752 | orchestrator | 2026-03-17 01:12:57.198756 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-03-17 01:12:57.198759 | orchestrator | Tuesday 17 March 2026 01:11:03 +0000 (0:00:00.807) 0:01:09.403 ********* 2026-03-17 01:12:57.198763 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:12:57.198769 | orchestrator | 2026-03-17 01:12:57.198777 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-03-17 01:12:57.198787 | orchestrator | Tuesday 17 March 2026 01:11:03 +0000 
(0:00:00.937) 0:01:10.341 ********* 2026-03-17 01:12:57.198793 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.198799 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:12:57.198805 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:12:57.198810 | orchestrator | 2026-03-17 01:12:57.198815 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-17 01:12:57.198822 | orchestrator | Tuesday 17 March 2026 01:11:06 +0000 (0:00:02.472) 0:01:12.814 ********* 2026-03-17 01:12:57.198829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.198844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.198852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.198863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.198904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199016 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199033 | orchestrator | 2026-03-17 01:12:57.199037 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-17 01:12:57.199041 | orchestrator | Tuesday 17 March 2026 01:11:17 +0000 (0:00:10.659) 0:01:23.473 ********* 2026-03-17 01:12:57.199045 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.199049 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:12:57.199052 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:12:57.199056 | orchestrator | 2026-03-17 01:12:57.199060 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-03-17 01:12:57.199064 | orchestrator | Tuesday 17 March 2026 01:11:18 +0000 (0:00:01.579) 0:01:25.053 ********* 2026-03-17 01:12:57.199068 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.199075 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:12:57.199079 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:12:57.199083 | orchestrator | 2026-03-17 01:12:57.199087 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-17 01:12:57.199091 | orchestrator | Tuesday 17 March 2026 01:11:20 +0000 (0:00:01.478) 0:01:26.531 ********* 2026-03-17 01:12:57.199098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.199106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199118 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:12:57.199126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.199135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199147 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:12:57.199151 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.199155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199175 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:12:57.199179 | orchestrator | 2026-03-17 01:12:57.199183 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-17 01:12:57.199186 | orchestrator | Tuesday 17 March 2026 01:11:21 +0000 (0:00:01.026) 0:01:27.558 ********* 2026-03-17 01:12:57.199190 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:12:57.199194 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:12:57.199198 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:12:57.199202 | orchestrator | 2026-03-17 01:12:57.199206 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-03-17 01:12:57.199210 | orchestrator | Tuesday 17 March 2026 01:11:21 +0000 (0:00:00.319) 
0:01:27.877 ********* 2026-03-17 01:12:57.199214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.199218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.199229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:12:57.199234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-17 01:12:57.199278 | orchestrator | 2026-03-17 01:12:57.199282 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-03-17 01:12:57.199286 | orchestrator | Tuesday 17 March 2026 01:11:24 +0000 (0:00:02.982) 0:01:30.860 ********* 2026-03-17 01:12:57.199290 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:12:57.199294 | orchestrator |  "msg": "Notifying handlers" 
2026-03-17 01:12:57.199297 | orchestrator | } 2026-03-17 01:12:57.199301 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:12:57.199305 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:12:57.199309 | orchestrator | } 2026-03-17 01:12:57.199313 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:12:57.199317 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:12:57.199324 | orchestrator | } 2026-03-17 01:12:57.199328 | orchestrator | 2026-03-17 01:12:57.199332 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:12:57.199335 | orchestrator | Tuesday 17 March 2026 01:11:24 +0000 (0:00:00.317) 0:01:31.178 ********* 2026-03-17 01:12:57.199342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.199348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199361 | orchestrator | skipping: [testbed-node-0] 2026-03-17 
01:12:57.199365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.199371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:12:57.199389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199401 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:12:57.199405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-17 01:12:57.199415 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:12:57.199419 | orchestrator | 2026-03-17 01:12:57.199425 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-17 01:12:57.199429 | orchestrator | Tuesday 17 March 2026 01:11:26 +0000 (0:00:01.263) 0:01:32.441 ********* 2026-03-17 01:12:57.199435 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:12:57.199441 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:12:57.199452 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:12:57.199458 | orchestrator | 2026-03-17 01:12:57.199464 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-17 01:12:57.199470 | orchestrator | Tuesday 17 March 2026 01:11:26 +0000 (0:00:00.271) 0:01:32.712 ********* 2026-03-17 01:12:57.199477 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.199483 | orchestrator | 2026-03-17 01:12:57.199488 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-17 01:12:57.199494 | orchestrator | Tuesday 17 March 2026 01:11:28 +0000 (0:00:02.573) 0:01:35.286 ********* 2026-03-17 01:12:57.199500 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.199506 | orchestrator | 2026-03-17 01:12:57.199512 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-17 01:12:57.199518 | orchestrator | Tuesday 17 March 2026 01:11:31 +0000 (0:00:02.582) 0:01:37.869 
********* 2026-03-17 01:12:57.199525 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.199532 | orchestrator | 2026-03-17 01:12:57.199538 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-17 01:12:57.199544 | orchestrator | Tuesday 17 March 2026 01:11:49 +0000 (0:00:18.207) 0:01:56.076 ********* 2026-03-17 01:12:57.199551 | orchestrator | 2026-03-17 01:12:57.199558 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-17 01:12:57.199564 | orchestrator | Tuesday 17 March 2026 01:11:49 +0000 (0:00:00.062) 0:01:56.139 ********* 2026-03-17 01:12:57.199571 | orchestrator | 2026-03-17 01:12:57.199577 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-17 01:12:57.199584 | orchestrator | Tuesday 17 March 2026 01:11:49 +0000 (0:00:00.062) 0:01:56.202 ********* 2026-03-17 01:12:57.199590 | orchestrator | 2026-03-17 01:12:57.199597 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-17 01:12:57.199604 | orchestrator | Tuesday 17 March 2026 01:11:50 +0000 (0:00:00.287) 0:01:56.489 ********* 2026-03-17 01:12:57.199615 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.199620 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:12:57.199624 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:12:57.199628 | orchestrator | 2026-03-17 01:12:57.199632 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-17 01:12:57.199635 | orchestrator | Tuesday 17 March 2026 01:12:08 +0000 (0:00:18.754) 0:02:15.244 ********* 2026-03-17 01:12:57.199639 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.199643 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:12:57.199647 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:12:57.199651 | orchestrator | 2026-03-17 
01:12:57.199654 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-17 01:12:57.199659 | orchestrator | Tuesday 17 March 2026 01:12:18 +0000 (0:00:09.527) 0:02:24.771 ********* 2026-03-17 01:12:57.199664 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.199668 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:12:57.199672 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:12:57.199677 | orchestrator | 2026-03-17 01:12:57.199681 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-17 01:12:57.199686 | orchestrator | Tuesday 17 March 2026 01:12:43 +0000 (0:00:25.381) 0:02:50.152 ********* 2026-03-17 01:12:57.199690 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:12:57.199695 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:12:57.199699 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:12:57.199703 | orchestrator | 2026-03-17 01:12:57.199708 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-17 01:12:57.199712 | orchestrator | Tuesday 17 March 2026 01:12:54 +0000 (0:00:10.376) 0:03:00.529 ********* 2026-03-17 01:12:57.199717 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:12:57.199721 | orchestrator | 2026-03-17 01:12:57.199725 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:12:57.199730 | orchestrator | testbed-node-0 : ok=33  changed=24  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-17 01:12:57.199734 | orchestrator | testbed-node-1 : ok=24  changed=17  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-17 01:12:57.199738 | orchestrator | testbed-node-2 : ok=24  changed=17  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-17 01:12:57.199743 | orchestrator | 2026-03-17 01:12:57.199747 | orchestrator | 2026-03-17 01:12:57.199751 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-17 01:12:57.199756 | orchestrator | Tuesday 17 March 2026 01:12:54 +0000 (0:00:00.354) 0:03:00.884 ********* 2026-03-17 01:12:57.199760 | orchestrator | =============================================================================== 2026-03-17 01:12:57.199764 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 25.38s 2026-03-17 01:12:57.199769 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 18.75s 2026-03-17 01:12:57.199773 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.21s 2026-03-17 01:12:57.199777 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 11.12s 2026-03-17 01:12:57.199782 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.66s 2026-03-17 01:12:57.199790 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.38s 2026-03-17 01:12:57.199794 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.53s 2026-03-17 01:12:57.199799 | orchestrator | service-ks-register : cinder | Granting/revoking user roles ------------- 7.80s 2026-03-17 01:12:57.199806 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 7.16s 2026-03-17 01:12:57.199810 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 6.12s 2026-03-17 01:12:57.199818 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.38s 2026-03-17 01:12:57.199822 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.38s 2026-03-17 01:12:57.199827 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.20s 2026-03-17 01:12:57.199831 | orchestrator | service-ks-register : 
cinder | Creating users --------------------------- 3.14s 2026-03-17 01:12:57.199835 | orchestrator | service-check-containers : cinder | Check containers -------------------- 2.98s 2026-03-17 01:12:57.199840 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.72s 2026-03-17 01:12:57.199844 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.64s 2026-03-17 01:12:57.199848 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.59s 2026-03-17 01:12:57.199853 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.58s 2026-03-17 01:12:57.199857 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.57s 2026-03-17 01:12:57.199861 | orchestrator | 2026-03-17 01:12:57 | INFO  | Task aebe2ce8-d238-4294-9e8b-6d98eddd8f98 is in state STARTED 2026-03-17 01:12:57.199866 | orchestrator | 2026-03-17 01:12:57 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:12:57.200146 | orchestrator | 2026-03-17 01:12:57 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:12:57.200494 | orchestrator | 2026-03-17 01:12:57 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:00.247612 | orchestrator | 2026-03-17 01:13:00 | INFO  | Task aebe2ce8-d238-4294-9e8b-6d98eddd8f98 is in state STARTED 2026-03-17 01:13:00.249298 | orchestrator | 2026-03-17 01:13:00 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:13:00.251473 | orchestrator | 2026-03-17 01:13:00 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:13:00.251515 | orchestrator | 2026-03-17 01:13:00 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:03.299073 | orchestrator | 2026-03-17 01:13:03 | INFO  | Task aebe2ce8-d238-4294-9e8b-6d98eddd8f98 is in state STARTED 2026-03-17 
01:13:03.300663 | orchestrator | 2026-03-17 01:13:03 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:13:03.303955 | orchestrator | 2026-03-17 01:13:03 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:13:03.304507 | orchestrator | 2026-03-17 01:13:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:06.355675 | orchestrator | 2026-03-17 01:13:06 | INFO  | Task aebe2ce8-d238-4294-9e8b-6d98eddd8f98 is in state STARTED 2026-03-17 01:13:06.356265 | orchestrator | 2026-03-17 01:13:06 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:13:06.357546 | orchestrator | 2026-03-17 01:13:06 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:13:06.357593 | orchestrator | 2026-03-17 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:09.388059 | orchestrator | 2026-03-17 01:13:09 | INFO  | Task aebe2ce8-d238-4294-9e8b-6d98eddd8f98 is in state SUCCESS 2026-03-17 01:13:09.389009 | orchestrator | 2026-03-17 01:13:09.389036 | orchestrator | 2026-03-17 01:13:09.389042 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-17 01:13:09.389047 | orchestrator | 2026-03-17 01:13:09.389052 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-17 01:13:09.389057 | orchestrator | Tuesday 17 March 2026 01:11:43 +0000 (0:00:00.315) 0:00:00.315 ********* 2026-03-17 01:13:09.389062 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:13:09.389067 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:13:09.389083 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:13:09.389088 | orchestrator | 2026-03-17 01:13:09.389093 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-17 01:13:09.389097 | orchestrator | Tuesday 17 March 2026 01:11:43 +0000 (0:00:00.280) 0:00:00.595 
********* 2026-03-17 01:13:09.389101 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-17 01:13:09.389105 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-17 01:13:09.389109 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-17 01:13:09.389112 | orchestrator | 2026-03-17 01:13:09.389116 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-17 01:13:09.389120 | orchestrator | 2026-03-17 01:13:09.389124 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-17 01:13:09.389128 | orchestrator | Tuesday 17 March 2026 01:11:43 +0000 (0:00:00.316) 0:00:00.911 ********* 2026-03-17 01:13:09.389132 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:13:09.389136 | orchestrator | 2026-03-17 01:13:09.389140 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-17 01:13:09.389150 | orchestrator | Tuesday 17 March 2026 01:11:44 +0000 (0:00:00.625) 0:00:01.537 ********* 2026-03-17 01:13:09.389156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389162 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389170 | orchestrator | 2026-03-17 01:13:09.389174 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-17 01:13:09.389178 | orchestrator | Tuesday 17 March 2026 01:11:45 +0000 (0:00:00.973) 0:00:02.510 ********* 2026-03-17 01:13:09.389182 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:13:09.389186 | orchestrator | 2026-03-17 01:13:09.389190 | orchestrator | TASK 
[grafana : include_tasks] ************************************************* 2026-03-17 01:13:09.389199 | orchestrator | Tuesday 17 March 2026 01:11:46 +0000 (0:00:00.875) 0:00:03.386 ********* 2026-03-17 01:13:09.389203 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:13:09.389206 | orchestrator | 2026-03-17 01:13:09.389210 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-17 01:13:09.389220 | orchestrator | Tuesday 17 March 2026 01:11:46 +0000 (0:00:00.497) 0:00:03.884 ********* 2026-03-17 01:13:09.389224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389239 | orchestrator | 2026-03-17 01:13:09.389243 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-17 01:13:09.389247 | orchestrator | Tuesday 17 March 2026 01:11:47 +0000 (0:00:01.336) 0:00:05.221 ********* 2026-03-17 01:13:09.389251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:13:09.389255 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:13:09.389260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:13:09.389266 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:13:09.389273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:13:09.389277 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:13:09.389281 | orchestrator | 2026-03-17 01:13:09.389285 | orchestrator | TASK [service-cert-copy : grafana | Copying over 
backend internal TLS key] ***** 2026-03-17 01:13:09.389289 | orchestrator | Tuesday 17 March 2026 01:11:48 +0000 (0:00:00.441) 0:00:05.663 ********* 2026-03-17 01:13:09.389295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:13:09.389299 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:13:09.389303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:13:09.389307 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:13:09.389311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:13:09.389317 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:13:09.389321 | orchestrator | 2026-03-17 01:13:09.389325 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-17 01:13:09.389329 | orchestrator | Tuesday 17 March 2026 01:11:49 +0000 (0:00:00.604) 0:00:06.268 ********* 2026-03-17 01:13:09.389335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389350 | orchestrator | 2026-03-17 01:13:09.389354 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-17 01:13:09.389357 | orchestrator | Tuesday 17 March 2026 01:11:50 +0000 (0:00:01.296) 0:00:07.564 ********* 2026-03-17 01:13:09.389361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389376 | orchestrator | 2026-03-17 01:13:09.389380 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-17 01:13:09.389386 | orchestrator | Tuesday 17 March 2026 01:11:52 +0000 (0:00:01.971) 0:00:09.535 ********* 2026-03-17 01:13:09.389390 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:13:09.389394 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:13:09.389398 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:13:09.389401 | orchestrator | 2026-03-17 01:13:09.389405 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-17 01:13:09.389409 | orchestrator | Tuesday 17 March 2026 01:11:52 +0000 (0:00:00.266) 0:00:09.802 ********* 2026-03-17 01:13:09.389415 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-17 01:13:09.389421 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-17 01:13:09.389428 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-17 01:13:09.389434 | orchestrator | 2026-03-17 01:13:09.389440 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-17 01:13:09.389446 | orchestrator | Tuesday 17 March 2026 01:11:53 +0000 (0:00:01.220) 0:00:11.022 ********* 2026-03-17 01:13:09.389452 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-17 01:13:09.389460 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-17 01:13:09.389466 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-17 01:13:09.389473 | orchestrator | 2026-03-17 01:13:09.389483 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-03-17 01:13:09.389490 | orchestrator | Tuesday 17 March 2026 01:11:55 +0000 (0:00:01.323) 0:00:12.346 ********* 2026-03-17 01:13:09.389494 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:13:09.389500 | orchestrator | 2026-03-17 01:13:09.389506 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-03-17 01:13:09.389512 | orchestrator | Tuesday 17 March 2026 01:11:56 +0000 (0:00:00.900) 0:00:13.246 ********* 2026-03-17 01:13:09.389519 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:13:09.389525 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:13:09.389532 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:13:09.389539 | orchestrator | 2026-03-17 01:13:09.389545 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-17 01:13:09.389557 | orchestrator | Tuesday 17 March 2026 01:11:56 +0000 (0:00:00.671) 0:00:13.917 ********* 2026-03-17 01:13:09.389564 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:13:09.389570 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:13:09.389585 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:13:09.389650 | orchestrator | 2026-03-17 01:13:09.389700 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-03-17 01:13:09.389709 | orchestrator | Tuesday 17 March 2026 01:11:57 +0000 (0:00:01.187) 0:00:15.105 ********* 2026-03-17 01:13:09.389717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:13:09.389745 | orchestrator | 2026-03-17 01:13:09.389751 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-03-17 01:13:09.389758 | orchestrator | Tuesday 17 March 2026 01:11:58 +0000 (0:00:00.952) 0:00:16.057 ********* 2026-03-17 01:13:09.389765 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:13:09.389777 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:13:09.389784 | orchestrator | } 2026-03-17 01:13:09.389791 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:13:09.389797 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:13:09.389804 | orchestrator | } 2026-03-17 01:13:09.389810 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:13:09.389816 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:13:09.389823 | orchestrator | } 2026-03-17 01:13:09.389829 | orchestrator | 2026-03-17 01:13:09.389836 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:13:09.389842 | orchestrator | Tuesday 17 March 2026 01:11:59 +0000 (0:00:00.299) 0:00:16.357 ********* 2026-03-17 01:13:09.389853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:13:09.389865 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:13:09.389872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:13:09.389879 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:13:09.389885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:13:09.389892 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:13:09.389899 | orchestrator | 2026-03-17 01:13:09.389905 | orchestrator | TASK [grafana : Creating grafana database] 
************************************* 2026-03-17 01:13:09.389965 | orchestrator | Tuesday 17 March 2026 01:11:59 +0000 (0:00:00.714) 0:00:17.072 ********* 2026-03-17 01:13:09.389973 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:13:09.389979 | orchestrator | 2026-03-17 01:13:09.389986 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-17 01:13:09.389993 | orchestrator | Tuesday 17 March 2026 01:12:02 +0000 (0:00:02.158) 0:00:19.230 ********* 2026-03-17 01:13:09.389999 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:13:09.390006 | orchestrator | 2026-03-17 01:13:09.390044 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-17 01:13:09.390051 | orchestrator | Tuesday 17 March 2026 01:12:04 +0000 (0:00:02.154) 0:00:21.384 ********* 2026-03-17 01:13:09.390057 | orchestrator | 2026-03-17 01:13:09.390064 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-17 01:13:09.390070 | orchestrator | Tuesday 17 March 2026 01:12:04 +0000 (0:00:00.056) 0:00:21.441 ********* 2026-03-17 01:13:09.390077 | orchestrator | 2026-03-17 01:13:09.390083 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-17 01:13:09.390094 | orchestrator | Tuesday 17 March 2026 01:12:04 +0000 (0:00:00.056) 0:00:21.498 ********* 2026-03-17 01:13:09.390100 | orchestrator | 2026-03-17 01:13:09.390107 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-17 01:13:09.390113 | orchestrator | Tuesday 17 March 2026 01:12:04 +0000 (0:00:00.063) 0:00:21.561 ********* 2026-03-17 01:13:09.390124 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:13:09.390131 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:13:09.390137 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:13:09.390144 | orchestrator | 2026-03-17 
01:13:09.390150 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-17 01:13:09.390157 | orchestrator | Tuesday 17 March 2026 01:12:11 +0000 (0:00:06.798) 0:00:28.359 ********* 2026-03-17 01:13:09.390163 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:13:09.390170 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:13:09.390176 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-17 01:13:09.390183 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:13:09.390189 | orchestrator | 2026-03-17 01:13:09.390196 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-17 01:13:09.390202 | orchestrator | Tuesday 17 March 2026 01:12:25 +0000 (0:00:14.049) 0:00:42.409 ********* 2026-03-17 01:13:09.390209 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:13:09.390215 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:13:09.390221 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:13:09.390228 | orchestrator | 2026-03-17 01:13:09.390234 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-17 01:13:09.390241 | orchestrator | Tuesday 17 March 2026 01:13:01 +0000 (0:00:36.164) 0:01:18.573 ********* 2026-03-17 01:13:09.390247 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:13:09.390254 | orchestrator | 2026-03-17 01:13:09.390267 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-17 01:13:09.390273 | orchestrator | Tuesday 17 March 2026 01:13:03 +0000 (0:00:01.944) 0:01:20.518 ********* 2026-03-17 01:13:09.390280 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:13:09.390286 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:13:09.390293 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:13:09.390300 | orchestrator | 2026-03-17 01:13:09.390306 
| orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-17 01:13:09.390313 | orchestrator | Tuesday 17 March 2026 01:13:03 +0000 (0:00:00.286) 0:01:20.804 ********* 2026-03-17 01:13:09.390321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-03-17 01:13:09.390329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-17 01:13:09.390337 | orchestrator | 2026-03-17 01:13:09.390344 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-17 01:13:09.390350 | orchestrator | Tuesday 17 March 2026 01:13:05 +0000 (0:00:02.315) 0:01:23.120 ********* 2026-03-17 01:13:09.390357 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:13:09.390363 | orchestrator | 2026-03-17 01:13:09.390369 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:13:09.390377 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:13:09.390384 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:13:09.390390 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-17 01:13:09.390397 | orchestrator | 2026-03-17 01:13:09.390403 | orchestrator | 2026-03-17 01:13:09.390410 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-17 01:13:09.390420 | orchestrator | Tuesday 17 March 2026 01:13:06 +0000 (0:00:00.395) 0:01:23.516 ********* 2026-03-17 01:13:09.390426 | orchestrator | =============================================================================== 2026-03-17 01:13:09.390433 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.16s 2026-03-17 01:13:09.390440 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 14.05s 2026-03-17 01:13:09.390444 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.80s 2026-03-17 01:13:09.390449 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.32s 2026-03-17 01:13:09.390455 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.16s 2026-03-17 01:13:09.390461 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.15s 2026-03-17 01:13:09.390467 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.97s 2026-03-17 01:13:09.390474 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 1.94s 2026-03-17 01:13:09.390480 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s 2026-03-17 01:13:09.390486 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.32s 2026-03-17 01:13:09.390496 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.30s 2026-03-17 01:13:09.390502 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s 2026-03-17 01:13:09.390509 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.19s 2026-03-17 01:13:09.390515 | orchestrator | grafana : Ensuring 
config directories exist ----------------------------- 0.97s 2026-03-17 01:13:09.390522 | orchestrator | service-check-containers : grafana | Check containers ------------------- 0.95s 2026-03-17 01:13:09.390528 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.90s 2026-03-17 01:13:09.390534 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.88s 2026-03-17 01:13:09.390541 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.71s 2026-03-17 01:13:09.390547 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 0.67s 2026-03-17 01:13:09.390553 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.63s 2026-03-17 01:13:09.390560 | orchestrator | 2026-03-17 01:13:09 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:13:09.390565 | orchestrator | 2026-03-17 01:13:09 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:13:09.390569 | orchestrator | 2026-03-17 01:13:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:12.436337 | orchestrator | 2026-03-17 01:13:12 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:13:12.437776 | orchestrator | 2026-03-17 01:13:12 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:13:12.438147 | orchestrator | 2026-03-17 01:13:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:15.476956 | orchestrator | 2026-03-17 01:13:15 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:13:15.479359 | orchestrator | 2026-03-17 01:13:15 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:13:15.479403 | orchestrator | 2026-03-17 01:13:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:18.522506 | orchestrator 
| 2026-03-17 01:13:18 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:13:48.966789 | orchestrator | 2026-03-17 01:13:48 | 
INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:13:48.966846 | orchestrator | 2026-03-17 01:13:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:52.014901 | orchestrator | 2026-03-17 01:13:52 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:13:52.017736 | orchestrator | 2026-03-17 01:13:52 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:13:52.017799 | orchestrator | 2026-03-17 01:13:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:55.060765 | orchestrator | 2026-03-17 01:13:55 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:13:55.063148 | orchestrator | 2026-03-17 01:13:55 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state STARTED 2026-03-17 01:13:55.063199 | orchestrator | 2026-03-17 01:13:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:13:58.102790 | orchestrator | 2026-03-17 01:13:58 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:15:58.210292 | orchestrator | 2026-03-17 01:15:58 | INFO  | Task 7c53ef4c-b888-4f2b-988f-e1135f7b82c9 is in state SUCCESS 2026-03-17 01:15:58.210377 | orchestrator | 2026-03-17 01:15:58 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:16:01.254305 | orchestrator | 2026-03-17 01:16:01 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:16:01.256644 | orchestrator | 2026-03-17 01:16:01 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:16:01.256706 | orchestrator | 2026-03-17 01:16:01 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:16:04.304005 | orchestrator | 2026-03-17 01:16:04 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:16:04.305523 | orchestrator | 2026-03-17 01:16:04 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 
2026-03-17 01:16:37.711967 | orchestrator | 
2026-03-17 01:16:37 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:16:37.712816 | orchestrator | 2026-03-17 01:16:37 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:16:37.712842 | orchestrator | 2026-03-17 01:16:37 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:16:40.750239 | orchestrator | 2026-03-17 01:16:40 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:16:40.750322 | orchestrator | 2026-03-17 01:16:40 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:16:40.750329 | orchestrator | 2026-03-17 01:16:40 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:16:43.784577 | orchestrator | 2026-03-17 01:16:43 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:16:43.787145 | orchestrator | 2026-03-17 01:16:43 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:16:43.787190 | orchestrator | 2026-03-17 01:16:43 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:16:46.824941 | orchestrator | 2026-03-17 01:16:46 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:16:46.827015 | orchestrator | 2026-03-17 01:16:46 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:16:46.827080 | orchestrator | 2026-03-17 01:16:46 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:16:49.868704 | orchestrator | 2026-03-17 01:16:49 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:16:49.871337 | orchestrator | 2026-03-17 01:16:49 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:16:49.871427 | orchestrator | 2026-03-17 01:16:49 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:16:52.928266 | orchestrator | 2026-03-17 01:16:52 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in 
state STARTED 2026-03-17 01:16:52.931554 | orchestrator | 2026-03-17 01:16:52 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:16:52.931936 | orchestrator | 2026-03-17 01:16:52 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:16:55.973112 | orchestrator | 2026-03-17 01:16:55 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:16:55.975212 | orchestrator | 2026-03-17 01:16:55 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:16:55.975251 | orchestrator | 2026-03-17 01:16:55 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:16:59.023907 | orchestrator | 2026-03-17 01:16:59 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:16:59.024692 | orchestrator | 2026-03-17 01:16:59 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:16:59.024737 | orchestrator | 2026-03-17 01:16:59 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:02.065527 | orchestrator | 2026-03-17 01:17:02 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:02.065849 | orchestrator | 2026-03-17 01:17:02 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:02.065887 | orchestrator | 2026-03-17 01:17:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:05.106839 | orchestrator | 2026-03-17 01:17:05 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:05.108446 | orchestrator | 2026-03-17 01:17:05 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:05.108676 | orchestrator | 2026-03-17 01:17:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:08.150702 | orchestrator | 2026-03-17 01:17:08 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:08.152692 | orchestrator | 2026-03-17 01:17:08 | 
INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:08.152740 | orchestrator | 2026-03-17 01:17:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:11.196526 | orchestrator | 2026-03-17 01:17:11 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:11.200115 | orchestrator | 2026-03-17 01:17:11 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:11.201022 | orchestrator | 2026-03-17 01:17:11 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:14.240750 | orchestrator | 2026-03-17 01:17:14 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:14.241696 | orchestrator | 2026-03-17 01:17:14 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:14.241859 | orchestrator | 2026-03-17 01:17:14 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:17.275679 | orchestrator | 2026-03-17 01:17:17 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:17.275898 | orchestrator | 2026-03-17 01:17:17 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:17.275927 | orchestrator | 2026-03-17 01:17:17 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:20.318122 | orchestrator | 2026-03-17 01:17:20 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:20.319212 | orchestrator | 2026-03-17 01:17:20 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:20.319304 | orchestrator | 2026-03-17 01:17:20 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:23.369284 | orchestrator | 2026-03-17 01:17:23 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:23.372481 | orchestrator | 2026-03-17 01:17:23 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 
2026-03-17 01:17:23.372535 | orchestrator | 2026-03-17 01:17:23 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:26.415975 | orchestrator | 2026-03-17 01:17:26 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:26.416410 | orchestrator | 2026-03-17 01:17:26 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:26.416462 | orchestrator | 2026-03-17 01:17:26 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:29.469449 | orchestrator | 2026-03-17 01:17:29 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:29.469730 | orchestrator | 2026-03-17 01:17:29 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:29.469812 | orchestrator | 2026-03-17 01:17:29 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:32.519013 | orchestrator | 2026-03-17 01:17:32 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:32.519759 | orchestrator | 2026-03-17 01:17:32 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:32.520377 | orchestrator | 2026-03-17 01:17:32 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:35.549666 | orchestrator | 2026-03-17 01:17:35 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:35.550320 | orchestrator | 2026-03-17 01:17:35 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:35.550357 | orchestrator | 2026-03-17 01:17:35 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:38.579689 | orchestrator | 2026-03-17 01:17:38 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:38.580384 | orchestrator | 2026-03-17 01:17:38 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:38.580440 | orchestrator | 2026-03-17 01:17:38 | INFO  | Wait 
1 second(s) until the next check 2026-03-17 01:17:41.619822 | orchestrator | 2026-03-17 01:17:41 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:41.622746 | orchestrator | 2026-03-17 01:17:41 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:41.622875 | orchestrator | 2026-03-17 01:17:41 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:44.660979 | orchestrator | 2026-03-17 01:17:44 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:44.662637 | orchestrator | 2026-03-17 01:17:44 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:44.662973 | orchestrator | 2026-03-17 01:17:44 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:47.701440 | orchestrator | 2026-03-17 01:17:47 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:47.705062 | orchestrator | 2026-03-17 01:17:47 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:47.705114 | orchestrator | 2026-03-17 01:17:47 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:50.748693 | orchestrator | 2026-03-17 01:17:50 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:50.752345 | orchestrator | 2026-03-17 01:17:50 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:50.754494 | orchestrator | 2026-03-17 01:17:50 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:53.795265 | orchestrator | 2026-03-17 01:17:53 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:53.797457 | orchestrator | 2026-03-17 01:17:53 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:53.797914 | orchestrator | 2026-03-17 01:17:53 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:56.836397 | orchestrator | 
2026-03-17 01:17:56 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:56.838244 | orchestrator | 2026-03-17 01:17:56 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:56.838296 | orchestrator | 2026-03-17 01:17:56 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:17:59.875230 | orchestrator | 2026-03-17 01:17:59 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:17:59.877197 | orchestrator | 2026-03-17 01:17:59 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:17:59.877286 | orchestrator | 2026-03-17 01:17:59 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:02.913551 | orchestrator | 2026-03-17 01:18:02 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:02.914332 | orchestrator | 2026-03-17 01:18:02 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:02.914540 | orchestrator | 2026-03-17 01:18:02 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:05.955791 | orchestrator | 2026-03-17 01:18:05 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:05.956616 | orchestrator | 2026-03-17 01:18:05 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:05.956651 | orchestrator | 2026-03-17 01:18:05 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:08.992730 | orchestrator | 2026-03-17 01:18:08 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:08.995772 | orchestrator | 2026-03-17 01:18:08 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:08.995835 | orchestrator | 2026-03-17 01:18:08 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:12.053647 | orchestrator | 2026-03-17 01:18:12 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in 
state STARTED 2026-03-17 01:18:12.057953 | orchestrator | 2026-03-17 01:18:12 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:12.058011 | orchestrator | 2026-03-17 01:18:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:15.103607 | orchestrator | 2026-03-17 01:18:15 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:15.106206 | orchestrator | 2026-03-17 01:18:15 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:15.106574 | orchestrator | 2026-03-17 01:18:15 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:18.152149 | orchestrator | 2026-03-17 01:18:18 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:18.153483 | orchestrator | 2026-03-17 01:18:18 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:18.153546 | orchestrator | 2026-03-17 01:18:18 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:21.194129 | orchestrator | 2026-03-17 01:18:21 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:21.197435 | orchestrator | 2026-03-17 01:18:21 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:21.197619 | orchestrator | 2026-03-17 01:18:21 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:24.243938 | orchestrator | 2026-03-17 01:18:24 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:24.246293 | orchestrator | 2026-03-17 01:18:24 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:24.246351 | orchestrator | 2026-03-17 01:18:24 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:27.290511 | orchestrator | 2026-03-17 01:18:27 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:27.290747 | orchestrator | 2026-03-17 01:18:27 | 
INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:27.290771 | orchestrator | 2026-03-17 01:18:27 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:30.326341 | orchestrator | 2026-03-17 01:18:30 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:30.330709 | orchestrator | 2026-03-17 01:18:30 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:30.330768 | orchestrator | 2026-03-17 01:18:30 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:33.365414 | orchestrator | 2026-03-17 01:18:33 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:33.367816 | orchestrator | 2026-03-17 01:18:33 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:33.367988 | orchestrator | 2026-03-17 01:18:33 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:36.415258 | orchestrator | 2026-03-17 01:18:36 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:36.417843 | orchestrator | 2026-03-17 01:18:36 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:36.417891 | orchestrator | 2026-03-17 01:18:36 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:39.464590 | orchestrator | 2026-03-17 01:18:39 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:39.466335 | orchestrator | 2026-03-17 01:18:39 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:39.466394 | orchestrator | 2026-03-17 01:18:39 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:42.518771 | orchestrator | 2026-03-17 01:18:42 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:42.520808 | orchestrator | 2026-03-17 01:18:42 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 
2026-03-17 01:18:42.520868 | orchestrator | 2026-03-17 01:18:42 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:45.564513 | orchestrator | 2026-03-17 01:18:45 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:45.565774 | orchestrator | 2026-03-17 01:18:45 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:45.566134 | orchestrator | 2026-03-17 01:18:45 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:48.597714 | orchestrator | 2026-03-17 01:18:48 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:48.598115 | orchestrator | 2026-03-17 01:18:48 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:48.598130 | orchestrator | 2026-03-17 01:18:48 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:51.620158 | orchestrator | 2026-03-17 01:18:51 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:51.620536 | orchestrator | 2026-03-17 01:18:51 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:51.620558 | orchestrator | 2026-03-17 01:18:51 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:54.642602 | orchestrator | 2026-03-17 01:18:54 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:54.645413 | orchestrator | 2026-03-17 01:18:54 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:54.645467 | orchestrator | 2026-03-17 01:18:54 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:18:57.683653 | orchestrator | 2026-03-17 01:18:57 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:18:57.684318 | orchestrator | 2026-03-17 01:18:57 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:18:57.684346 | orchestrator | 2026-03-17 01:18:57 | INFO  | Wait 
1 second(s) until the next check 2026-03-17 01:19:00.718844 | orchestrator | 2026-03-17 01:19:00 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:19:00.720694 | orchestrator | 2026-03-17 01:19:00 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:19:00.720828 | orchestrator | 2026-03-17 01:19:00 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:19:03.764948 | orchestrator | 2026-03-17 01:19:03 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:19:03.766811 | orchestrator | 2026-03-17 01:19:03 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:19:03.766867 | orchestrator | 2026-03-17 01:19:03 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:19:06.818051 | orchestrator | 2026-03-17 01:19:06 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:19:06.821264 | orchestrator | 2026-03-17 01:19:06 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:19:06.821953 | orchestrator | 2026-03-17 01:19:06 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:19:09.862269 | orchestrator | 2026-03-17 01:19:09 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:19:09.864366 | orchestrator | 2026-03-17 01:19:09 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:19:09.864410 | orchestrator | 2026-03-17 01:19:09 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:19:12.909352 | orchestrator | 2026-03-17 01:19:12 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED 2026-03-17 01:19:12.912479 | orchestrator | 2026-03-17 01:19:12 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED 2026-03-17 01:19:12.912558 | orchestrator | 2026-03-17 01:19:12 | INFO  | Wait 1 second(s) until the next check 2026-03-17 01:19:15.959839 | orchestrator | 
2026-03-17 01:19:15 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:19:15.960768 | orchestrator | 2026-03-17 01:19:15 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED
2026-03-17 01:19:15.962033 | orchestrator | 2026-03-17 01:19:15 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:19:19.017076 | orchestrator | 2026-03-17 01:19:19 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:19:19.018882 | orchestrator | 2026-03-17 01:19:19 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED
2026-03-17 01:19:19.018957 | orchestrator | 2026-03-17 01:19:19 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:19:22.068448 | orchestrator | 2026-03-17 01:19:22 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:19:22.069683 | orchestrator | 2026-03-17 01:19:22 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state STARTED
2026-03-17 01:19:22.069745 | orchestrator | 2026-03-17 01:19:22 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:19:25.110435 | orchestrator | 2026-03-17 01:19:25 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:19:25.114757 | orchestrator | 2026-03-17 01:19:25 | INFO  | Task 12010c16-52db-4256-b0b5-2dbc63101cdb is in state SUCCESS
2026-03-17 01:19:25.116273 | orchestrator |
2026-03-17 01:19:25.116314 | orchestrator |
2026-03-17 01:19:25.116320 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:19:25.116325 | orchestrator |
2026-03-17 01:19:25.116329 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:19:25.116333 | orchestrator | Tuesday 17 March 2026 01:10:53 +0000 (0:00:00.400) 0:00:00.400 *********
2026-03-17 01:19:25.116337 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:19:25.116342 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:19:25.116346 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:19:25.116350 | orchestrator |
2026-03-17 01:19:25.116376 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:19:25.116381 | orchestrator | Tuesday 17 March 2026 01:10:53 +0000 (0:00:00.577) 0:00:00.978 *********
2026-03-17 01:19:25.116385 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-17 01:19:25.116389 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-17 01:19:25.116400 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-17 01:19:25.116404 | orchestrator |
2026-03-17 01:19:25.116408 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-17 01:19:25.116411 | orchestrator |
2026-03-17 01:19:25.116415 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-17 01:19:25.116419 | orchestrator | Tuesday 17 March 2026 01:10:54 +0000 (0:00:00.546) 0:00:01.524 *********
2026-03-17 01:19:25.116423 | orchestrator |
2026-03-17 01:19:25.116426 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-17 01:19:25.116430 | orchestrator |
2026-03-17 01:19:25.116434 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-17 01:19:25.116438 | orchestrator |
2026-03-17 01:19:25.116442 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-03-17 01:19:25.116445 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:19:25.116449 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:19:25.116453 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:19:25.116457 | orchestrator |
2026-03-17 01:19:25.116461 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:19:25.116465 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:19:25.116469 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:19:25.116473 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-17 01:19:25.116487 | orchestrator |
2026-03-17 01:19:25.116491 | orchestrator |
2026-03-17 01:19:25.116495 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:19:25.116499 | orchestrator | Tuesday 17 March 2026 01:14:32 +0000 (0:03:38.471) 0:03:39.995 *********
2026-03-17 01:19:25.116503 | orchestrator | ===============================================================================
2026-03-17 01:19:25.116507 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 218.47s
2026-03-17 01:19:25.116510 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.58s
2026-03-17 01:19:25.116514 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2026-03-17 01:19:25.116518 | orchestrator |
2026-03-17 01:19:25.116522 | orchestrator |
2026-03-17 01:19:25.116526 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:19:25.116530 | orchestrator |
2026-03-17 01:19:25.116534 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:19:25.116537 | orchestrator | Tuesday 17 March 2026 01:14:36 +0000 (0:00:00.300) 0:00:00.300 *********
2026-03-17 01:19:25.116541 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:19:25.116545 | orchestrator | ok: [testbed-node-1]
2026-03-17 01:19:25.116549 | orchestrator | ok: [testbed-node-2]
2026-03-17 01:19:25.116552 | orchestrator |
2026-03-17 01:19:25.116556 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:19:25.116560 | orchestrator | Tuesday 17 March 2026 01:14:36 +0000 (0:00:00.277) 0:00:00.578 *********
2026-03-17 01:19:25.116564 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-03-17 01:19:25.116567 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-03-17 01:19:25.116571 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-03-17 01:19:25.116575 | orchestrator |
2026-03-17 01:19:25.116579 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-03-17 01:19:25.116583 | orchestrator |
2026-03-17 01:19:25.116586 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-17 01:19:25.116590 | orchestrator | Tuesday 17 March 2026 01:14:37 +0000 (0:00:00.291) 0:00:00.869 *********
2026-03-17 01:19:25.116594 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:19:25.116613 | orchestrator |
2026-03-17 01:19:25.116617 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] **************
2026-03-17 01:19:25.116621 | orchestrator | Tuesday 17 March 2026 01:14:37 +0000 (0:00:00.616) 0:00:01.486 *********
2026-03-17 01:19:25.116625 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-03-17 01:19:25.116629 | orchestrator |
2026-03-17 01:19:25.116633 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] *************
2026-03-17 01:19:25.116637 | orchestrator | Tuesday 17 March 2026 01:14:41 +0000 (0:00:04.211) 0:00:05.697 *********
2026-03-17 01:19:25.116640 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-03-17 01:19:25.116644 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-03-17 01:19:25.116666 | orchestrator |
2026-03-17 01:19:25.116670 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-03-17 01:19:25.116678 | orchestrator | Tuesday 17 March 2026 01:14:48 +0000 (0:00:06.484) 0:00:12.181 *********
2026-03-17 01:19:25.116690 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-17 01:19:25.116702 | orchestrator |
2026-03-17 01:19:25.116709 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-03-17 01:19:25.116713 | orchestrator | Tuesday 17 March 2026 01:14:51 +0000 (0:00:03.111) 0:00:15.292 *********
2026-03-17 01:19:25.116717 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-17 01:19:25.116721 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-17 01:19:25.116730 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-17 01:19:25.116734 | orchestrator |
2026-03-17 01:19:25.116737 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-03-17 01:19:25.116741 | orchestrator | Tuesday 17 March 2026 01:14:58 +0000 (0:00:06.916) 0:00:22.209 *********
2026-03-17 01:19:25.116745 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-17 01:19:25.116749 | orchestrator |
2026-03-17 01:19:25.116759 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************
2026-03-17 01:19:25.116763 | orchestrator | Tuesday 17 March 2026 01:15:01 +0000 (0:00:02.793) 0:00:25.002 *********
2026-03-17 01:19:25.116767 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-17 01:19:25.116771 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-17 01:19:25.116775 | orchestrator |
2026-03-17 01:19:25.116778 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-03-17 01:19:25.116791 | orchestrator | Tuesday 17 March 2026 01:15:07 +0000 (0:00:06.780) 0:00:31.783 *********
2026-03-17 01:19:25.116795 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-03-17 01:19:25.116799 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-03-17 01:19:25.116802 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-03-17 01:19:25.116806 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-03-17 01:19:25.116810 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-03-17 01:19:25.116814 | orchestrator |
2026-03-17 01:19:25.116817 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-17 01:19:25.116821 | orchestrator | Tuesday 17 March 2026 01:15:23 +0000 (0:00:16.011) 0:00:47.795 *********
2026-03-17 01:19:25.116825 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:19:25.116829 | orchestrator |
2026-03-17 01:19:25.116832 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-03-17 01:19:25.116836 | orchestrator | Tuesday 17 March 2026 01:15:24 +0000 (0:00:00.694) 0:00:48.489 *********
2026-03-17 01:19:25.116840 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.116844 | orchestrator |
2026-03-17 01:19:25.116848 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-03-17 01:19:25.116851 | orchestrator | Tuesday 17 March 2026 01:15:30 +0000 (0:00:05.385) 0:00:53.875 *********
2026-03-17 01:19:25.116855 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.116859 | orchestrator |
2026-03-17 01:19:25.116862 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-17 01:19:25.116866 | orchestrator | Tuesday 17 March 2026 01:15:34 +0000 (0:00:04.493) 0:00:58.368 *********
2026-03-17 01:19:25.116870 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:19:25.116874 | orchestrator |
2026-03-17 01:19:25.116878 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-03-17 01:19:25.116883 | orchestrator | Tuesday 17 March 2026 01:15:38 +0000 (0:00:03.719) 0:01:02.087 *********
2026-03-17 01:19:25.116887 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-17 01:19:25.116891 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-17 01:19:25.116896 | orchestrator |
2026-03-17 01:19:25.116900 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-03-17 01:19:25.116904 | orchestrator | Tuesday 17 March 2026 01:15:49 +0000 (0:00:11.228) 0:01:13.316 *********
2026-03-17 01:19:25.116908 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-03-17 01:19:25.116913 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-03-17 01:19:25.116918 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-03-17 01:19:25.116926 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-03-17 01:19:25.116930 | orchestrator |
2026-03-17 01:19:25.116934 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-03-17 01:19:25.116938 | orchestrator | Tuesday 17 March 2026 01:16:04 +0000 (0:00:15.442) 0:01:28.758 *********
2026-03-17 01:19:25.116943 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.116947 | orchestrator |
2026-03-17 01:19:25.116951 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-03-17 01:19:25.116955 | orchestrator | Tuesday 17 March 2026 01:16:10 +0000 (0:00:05.630) 0:01:34.389 *********
2026-03-17 01:19:25.116960 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.116964 | orchestrator |
2026-03-17 01:19:25.116968 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-03-17 01:19:25.116973 | orchestrator | Tuesday 17 March 2026 01:16:15 +0000 (0:00:05.360) 0:01:39.750 *********
2026-03-17 01:19:25.116977 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:25.116982 | orchestrator |
2026-03-17 01:19:25.116987 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-03-17 01:19:25.116994 | orchestrator | Tuesday 17 March 2026 01:16:16 +0000 (0:00:00.202) 0:01:39.952 *********
2026-03-17 01:19:25.116998 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:19:25.117002 | orchestrator |
2026-03-17 01:19:25.117006 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-17 01:19:25.117009 | orchestrator | Tuesday 17 March 2026 01:16:20 +0000 (0:00:04.311) 0:01:44.264 *********
2026-03-17 01:19:25.117013 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-03-17 01:19:25.117017 | orchestrator |
2026-03-17 01:19:25.117021 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-03-17 01:19:25.117024 | orchestrator | Tuesday 17 March 2026 01:16:21 +0000 (0:00:00.817) 0:01:45.082 *********
2026-03-17 01:19:25.117028 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.117032 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:25.117035 | orchestrator | changed: [testbed-node-2]
2026-03-17
01:19:25.117039 | orchestrator | 2026-03-17 01:19:25.117045 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-17 01:19:25.117049 | orchestrator | Tuesday 17 March 2026 01:16:27 +0000 (0:00:05.752) 0:01:50.834 ********* 2026-03-17 01:19:25.117052 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:25.117056 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:25.117060 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:25.117063 | orchestrator | 2026-03-17 01:19:25.117067 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-17 01:19:25.117071 | orchestrator | Tuesday 17 March 2026 01:16:32 +0000 (0:00:05.394) 0:01:56.228 ********* 2026-03-17 01:19:25.117075 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:25.117078 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:25.117082 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:25.117086 | orchestrator | 2026-03-17 01:19:25.117089 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-17 01:19:25.117093 | orchestrator | Tuesday 17 March 2026 01:16:33 +0000 (0:00:00.908) 0:01:57.137 ********* 2026-03-17 01:19:25.117097 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:25.117101 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:19:25.117104 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:19:25.117108 | orchestrator | 2026-03-17 01:19:25.117112 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-17 01:19:25.117115 | orchestrator | Tuesday 17 March 2026 01:16:35 +0000 (0:00:01.797) 0:01:58.935 ********* 2026-03-17 01:19:25.117119 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:25.117123 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:25.117127 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:25.117133 | orchestrator 
| 2026-03-17 01:19:25.117136 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-17 01:19:25.117140 | orchestrator | Tuesday 17 March 2026 01:16:36 +0000 (0:00:01.251) 0:02:00.187 ********* 2026-03-17 01:19:25.117144 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:25.117147 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:25.117151 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:25.117155 | orchestrator | 2026-03-17 01:19:25.117159 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-17 01:19:25.117162 | orchestrator | Tuesday 17 March 2026 01:16:37 +0000 (0:00:01.145) 0:02:01.332 ********* 2026-03-17 01:19:25.117166 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:25.117170 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:25.117173 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:25.117177 | orchestrator | 2026-03-17 01:19:25.117181 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-17 01:19:25.117184 | orchestrator | Tuesday 17 March 2026 01:16:40 +0000 (0:00:02.578) 0:02:03.911 ********* 2026-03-17 01:19:25.117188 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:25.117192 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:25.117195 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:25.117199 | orchestrator | 2026-03-17 01:19:25.117203 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-17 01:19:25.117207 | orchestrator | Tuesday 17 March 2026 01:16:41 +0000 (0:00:01.899) 0:02:05.810 ********* 2026-03-17 01:19:25.117210 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:25.117214 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:19:25.117218 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:19:25.117221 | orchestrator | 2026-03-17 01:19:25.117225 | 
orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-17 01:19:25.117229 | orchestrator | Tuesday 17 March 2026 01:16:42 +0000 (0:00:00.579) 0:02:06.390 ********* 2026-03-17 01:19:25.117232 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:25.117236 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:19:25.117240 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:19:25.117243 | orchestrator | 2026-03-17 01:19:25.117247 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:19:25.117251 | orchestrator | Tuesday 17 March 2026 01:16:46 +0000 (0:00:03.517) 0:02:09.907 ********* 2026-03-17 01:19:25.117255 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:19:25.117258 | orchestrator | 2026-03-17 01:19:25.117262 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-17 01:19:25.117266 | orchestrator | Tuesday 17 March 2026 01:16:46 +0000 (0:00:00.843) 0:02:10.750 ********* 2026-03-17 01:19:25.117270 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:25.117273 | orchestrator | 2026-03-17 01:19:25.117277 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-17 01:19:25.117281 | orchestrator | Tuesday 17 March 2026 01:16:51 +0000 (0:00:04.289) 0:02:15.039 ********* 2026-03-17 01:19:25.117285 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:25.117289 | orchestrator | 2026-03-17 01:19:25.117292 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-17 01:19:25.117296 | orchestrator | Tuesday 17 March 2026 01:16:54 +0000 (0:00:03.405) 0:02:18.445 ********* 2026-03-17 01:19:25.117300 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-17 01:19:25.117304 | orchestrator | ok: [testbed-node-0] => 
(item=lb-health-mgr-sec-grp) 2026-03-17 01:19:25.117308 | orchestrator | 2026-03-17 01:19:25.117315 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-17 01:19:25.117325 | orchestrator | Tuesday 17 March 2026 01:17:01 +0000 (0:00:06.875) 0:02:25.320 ********* 2026-03-17 01:19:25.117331 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:25.117338 | orchestrator | 2026-03-17 01:19:25.117343 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-17 01:19:25.117388 | orchestrator | Tuesday 17 March 2026 01:17:05 +0000 (0:00:04.333) 0:02:29.654 ********* 2026-03-17 01:19:25.117396 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:25.117402 | orchestrator | ok: [testbed-node-1] 2026-03-17 01:19:25.117408 | orchestrator | ok: [testbed-node-2] 2026-03-17 01:19:25.117414 | orchestrator | 2026-03-17 01:19:25.117420 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-17 01:19:25.117426 | orchestrator | Tuesday 17 March 2026 01:17:06 +0000 (0:00:00.340) 0:02:29.994 ********* 2026-03-17 01:19:25.117438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.117446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.117453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.117460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.117472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.117486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.117493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117574 | orchestrator | 2026-03-17 01:19:25.117579 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-17 01:19:25.117585 | orchestrator | Tuesday 17 March 2026 01:17:08 +0000 (0:00:02.552) 0:02:32.547 ********* 2026-03-17 01:19:25.117591 | 
orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:25.117597 | orchestrator | 2026-03-17 01:19:25.117603 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-17 01:19:25.117609 | orchestrator | Tuesday 17 March 2026 01:17:08 +0000 (0:00:00.121) 0:02:32.669 ********* 2026-03-17 01:19:25.117616 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:25.117622 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:25.117628 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:25.117634 | orchestrator | 2026-03-17 01:19:25.117641 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-17 01:19:25.117647 | orchestrator | Tuesday 17 March 2026 01:17:09 +0000 (0:00:00.276) 0:02:32.945 ********* 2026-03-17 01:19:25.117654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.117673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.117684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.117690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.117697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.117703 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:25.117710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.117722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.117734 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.117744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.117751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.117758 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 01:19:25.117765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.117771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.117788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.117799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.117808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.117815 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:25.117822 | orchestrator | 2026-03-17 01:19:25.117828 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:19:25.117834 | orchestrator | Tuesday 17 March 2026 01:17:09 +0000 (0:00:00.632) 0:02:33.578 ********* 2026-03-17 01:19:25.117841 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:19:25.117847 | orchestrator | 2026-03-17 01:19:25.117854 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-17 01:19:25.117860 | orchestrator | Tuesday 17 March 2026 01:17:10 +0000 (0:00:00.662) 0:02:34.241 ********* 2026-03-17 01:19:25.117867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.117875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.117886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.117894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.117901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.117905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.117910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.117956 | orchestrator | 2026-03-17 01:19:25.117960 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-17 01:19:25.117964 | orchestrator | Tuesday 17 March 2026 01:17:15 +0000 (0:00:04.780) 0:02:39.021 ********* 2026-03-17 01:19:25.117971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.117975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.117981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.117985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.117992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.117996 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:25.118000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.118004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.118011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.118069 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:25.118076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.118083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.118089 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.118114 | orchestrator | skipping: 
[testbed-node-2] 2026-03-17 01:19:25.118120 | orchestrator | 2026-03-17 01:19:25.118125 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-17 01:19:25.118132 | orchestrator | Tuesday 17 March 2026 01:17:15 +0000 (0:00:00.668) 0:02:39.690 ********* 2026-03-17 01:19:25.118137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.118148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.118156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.118180 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:25.118189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.118200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.118207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.118226 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:25.118241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.118248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.118258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.118273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.118279 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:25.118286 | orchestrator | 2026-03-17 01:19:25.118293 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-17 01:19:25.118300 | orchestrator | Tuesday 17 March 2026 01:17:16 +0000 (0:00:00.997) 0:02:40.688 ********* 2026-03-17 01:19:25.118697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.118732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.118748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.118756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.118762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.118768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 
01:19:25.118781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}}) 2026-03-17 01:19:25.118808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': 
'30'}}}) 2026-03-17 01:19:25.118828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118860 | orchestrator | 2026-03-17 01:19:25.118867 | orchestrator | TASK [octavia : 
Copying over octavia-wsgi.conf] ******************************** 2026-03-17 01:19:25.118874 | orchestrator | Tuesday 17 March 2026 01:17:21 +0000 (0:00:04.848) 0:02:45.537 ********* 2026-03-17 01:19:25.118881 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-17 01:19:25.118885 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-17 01:19:25.118889 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-17 01:19:25.118893 | orchestrator | 2026-03-17 01:19:25.118897 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-17 01:19:25.118900 | orchestrator | Tuesday 17 March 2026 01:17:23 +0000 (0:00:01.964) 0:02:47.501 ********* 2026-03-17 01:19:25.118905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.118909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.118916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.118925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.118929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.118934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.118938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.118996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119010 | orchestrator | 2026-03-17 01:19:25.119016 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-17 01:19:25.119022 | orchestrator | Tuesday 17 March 2026 01:17:41 +0000 (0:00:17.959) 0:03:05.460 ********* 2026-03-17 01:19:25.119029 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:25.119033 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:25.119036 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:25.119040 | orchestrator | 2026-03-17 01:19:25.119044 | orchestrator | TASK [octavia : Copying certificate files 
for octavia-worker] ****************** 2026-03-17 01:19:25.119048 | orchestrator | Tuesday 17 March 2026 01:17:43 +0000 (0:00:01.862) 0:03:07.322 ********* 2026-03-17 01:19:25.119051 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-17 01:19:25.119055 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-17 01:19:25.119059 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-17 01:19:25.119065 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-17 01:19:25.119069 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-17 01:19:25.119073 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-17 01:19:25.119077 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-17 01:19:25.119081 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-17 01:19:25.119084 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-17 01:19:25.119088 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-17 01:19:25.119092 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-17 01:19:25.119095 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-17 01:19:25.119099 | orchestrator | 2026-03-17 01:19:25.119105 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-17 01:19:25.119109 | orchestrator | Tuesday 17 March 2026 01:17:47 +0000 (0:00:04.399) 0:03:11.722 ********* 2026-03-17 01:19:25.119112 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-17 01:19:25.119116 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-17 01:19:25.119120 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-17 01:19:25.119124 | orchestrator | 
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-17 01:19:25.119128 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-17 01:19:25.119134 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-17 01:19:25.119142 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-17 01:19:25.119152 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-17 01:19:25.119158 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-17 01:19:25.119164 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-17 01:19:25.119170 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-17 01:19:25.119176 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-17 01:19:25.119182 | orchestrator | 2026-03-17 01:19:25.119188 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-17 01:19:25.119193 | orchestrator | Tuesday 17 March 2026 01:17:52 +0000 (0:00:04.600) 0:03:16.323 ********* 2026-03-17 01:19:25.119199 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-17 01:19:25.119204 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-17 01:19:25.119211 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-17 01:19:25.119217 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-17 01:19:25.119223 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-17 01:19:25.119229 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-17 01:19:25.119235 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-17 01:19:25.119242 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-17 01:19:25.119248 | orchestrator | changed: 
[testbed-node-2] => (item=server_ca.cert.pem) 2026-03-17 01:19:25.119259 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-17 01:19:25.119263 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-17 01:19:25.119267 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-17 01:19:25.119271 | orchestrator | 2026-03-17 01:19:25.119274 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-03-17 01:19:25.119278 | orchestrator | Tuesday 17 March 2026 01:17:57 +0000 (0:00:04.933) 0:03:21.256 ********* 2026-03-17 01:19:25.119282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.119291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.119301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-17 01:19:25.119308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.119318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.119329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-17 01:19:25.119335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119346 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119391 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119409 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:25.119421 | orchestrator | 2026-03-17 01:19:25.119425 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-03-17 01:19:25.119429 | orchestrator | Tuesday 17 March 2026 01:18:01 +0000 (0:00:03.764) 0:03:25.021 ********* 2026-03-17 01:19:25.119433 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:19:25.119440 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:19:25.119444 | orchestrator | } 2026-03-17 01:19:25.119448 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:19:25.119452 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:19:25.119458 | orchestrator | } 2026-03-17 01:19:25.119464 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:19:25.119471 | orchestrator |  "msg": "Notifying handlers" 
2026-03-17 01:19:25.119476 | orchestrator | } 2026-03-17 01:19:25.119482 | orchestrator | 2026-03-17 01:19:25.119488 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:19:25.119494 | orchestrator | Tuesday 17 March 2026 01:18:01 +0000 (0:00:00.486) 0:03:25.507 ********* 2026-03-17 01:19:25.119501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.119514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.119521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.119527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.119539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.119546 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:25.119555 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.119567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.119574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.119580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.119587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.119593 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:25.119604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-17 01:19:25.119613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-17 01:19:25.119623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.119629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-17 01:19:25.119635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-17 01:19:25.119640 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:25.119646 | orchestrator | 2026-03-17 01:19:25.119652 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-17 01:19:25.119658 | orchestrator | Tuesday 17 March 2026 01:18:02 +0000 (0:00:00.866) 0:03:26.374 ********* 2026-03-17 01:19:25.119664 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:25.119669 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:25.119675 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:25.119681 | orchestrator | 2026-03-17 01:19:25.119687 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-17 01:19:25.119693 | orchestrator | Tuesday 17 March 2026 01:18:02 +0000 (0:00:00.289) 0:03:26.664 ********* 2026-03-17 01:19:25.119699 | orchestrator | changed: [testbed-node-0] 
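The service definitions dumped above carry kolla-style `healthcheck` dicts such as `{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}`. As a minimal illustration (not kolla-ansible's actual code; the helper name is hypothetical), such a dict maps naturally onto Docker's `docker run` healthcheck flags:

```python
def healthcheck_to_docker_args(healthcheck):
    """Translate a kolla-style healthcheck dict into docker run flags.

    Interval/timeout/start_period values in the log are plain seconds
    given as strings, e.g. {'interval': '30', ...}.
    """
    test = healthcheck["test"]
    # ['CMD-SHELL', '<shell command>'] means the command runs via a shell.
    cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
    return [
        "--health-cmd", cmd,
        "--health-interval", healthcheck["interval"] + "s",
        "--health-retries", str(healthcheck["retries"]),
        "--health-start-period", healthcheck["start_period"] + "s",
        "--health-timeout", healthcheck["timeout"] + "s",
    ]

# The octavia-worker healthcheck as it appears in the log above:
args = healthcheck_to_docker_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
    "timeout": "30",
})
```

This is only a sketch of the mapping; the real deployment renders these dicts through kolla-ansible's own container management modules.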
2026-03-17 01:19:25.119705 | orchestrator |
2026-03-17 01:19:25.119712 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-17 01:19:25.119719 | orchestrator | Tuesday 17 March 2026 01:18:04 +0000 (0:00:02.024) 0:03:28.688 *********
2026-03-17 01:19:25.119725 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.119730 | orchestrator |
2026-03-17 01:19:25.119736 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-17 01:19:25.119742 | orchestrator | Tuesday 17 March 2026 01:18:06 +0000 (0:00:02.054) 0:03:30.743 *********
2026-03-17 01:19:25.119747 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.119753 | orchestrator |
2026-03-17 01:19:25.119758 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-17 01:19:25.119764 | orchestrator | Tuesday 17 March 2026 01:18:09 +0000 (0:00:02.473) 0:03:33.217 *********
2026-03-17 01:19:25.119770 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.119775 | orchestrator |
2026-03-17 01:19:25.119786 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-17 01:19:25.119793 | orchestrator | Tuesday 17 March 2026 01:18:11 +0000 (0:00:02.263) 0:03:35.480 *********
2026-03-17 01:19:25.119807 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.119813 | orchestrator |
2026-03-17 01:19:25.119819 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-17 01:19:25.119825 | orchestrator | Tuesday 17 March 2026 01:18:33 +0000 (0:00:21.449) 0:03:56.930 *********
2026-03-17 01:19:25.119830 | orchestrator |
2026-03-17 01:19:25.119835 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-17 01:19:25.119842 | orchestrator | Tuesday 17 March 2026 01:18:33 +0000 (0:00:00.062) 0:03:56.992 *********
2026-03-17 01:19:25.119848 | orchestrator |
2026-03-17 01:19:25.119854 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-17 01:19:25.119860 | orchestrator | Tuesday 17 March 2026 01:18:33 +0000 (0:00:00.059) 0:03:57.051 *********
2026-03-17 01:19:25.119867 | orchestrator |
2026-03-17 01:19:25.119876 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-17 01:19:25.119883 | orchestrator | Tuesday 17 March 2026 01:18:33 +0000 (0:00:00.061) 0:03:57.113 *********
2026-03-17 01:19:25.119889 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.119895 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:25.119901 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:25.119907 | orchestrator |
2026-03-17 01:19:25.119913 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-17 01:19:25.119919 | orchestrator | Tuesday 17 March 2026 01:18:47 +0000 (0:00:14.084) 0:04:11.197 *********
2026-03-17 01:19:25.119925 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:25.119931 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.119938 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:25.119944 | orchestrator |
2026-03-17 01:19:25.119950 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-17 01:19:25.119956 | orchestrator | Tuesday 17 March 2026 01:18:59 +0000 (0:00:11.762) 0:04:22.960 *********
2026-03-17 01:19:25.119963 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.119969 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:25.119975 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:25.119981 | orchestrator |
2026-03-17 01:19:25.119987 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-17 01:19:25.119993 | orchestrator | Tuesday 17 March 2026 01:19:04 +0000 (0:00:05.181) 0:04:28.142 *********
2026-03-17 01:19:25.119999 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:25.120005 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:25.120011 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.120017 | orchestrator |
2026-03-17 01:19:25.120023 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-17 01:19:25.120029 | orchestrator | Tuesday 17 March 2026 01:19:12 +0000 (0:00:08.288) 0:04:36.430 *********
2026-03-17 01:19:25.120034 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:25.120040 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:25.120045 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:25.120051 | orchestrator |
2026-03-17 01:19:25.120057 | orchestrator | PLAY RECAP *********************************************************************
2026-03-17 01:19:25.120063 | orchestrator | testbed-node-0 : ok=58  changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-17 01:19:25.120069 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 01:19:25.120075 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-17 01:19:25.120082 | orchestrator |
2026-03-17 01:19:25.120087 | orchestrator |
2026-03-17 01:19:25.120094 | orchestrator | TASKS RECAP ********************************************************************
2026-03-17 01:19:25.120100 | orchestrator | Tuesday 17 March 2026 01:19:22 +0000 (0:00:09.967) 0:04:46.398 *********
2026-03-17 01:19:25.120120 | orchestrator | ===============================================================================
2026-03-17 01:19:25.120128 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.45s
2026-03-17 01:19:25.120134 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.96s
2026-03-17 01:19:25.120141 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.01s
2026-03-17 01:19:25.120148 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.44s
2026-03-17 01:19:25.120155 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.08s
2026-03-17 01:19:25.120161 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.76s
2026-03-17 01:19:25.120168 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.23s
2026-03-17 01:19:25.120174 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 9.97s
2026-03-17 01:19:25.120181 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.29s
2026-03-17 01:19:25.120188 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 6.92s
2026-03-17 01:19:25.120194 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.87s
2026-03-17 01:19:25.120200 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 6.78s
2026-03-17 01:19:25.120207 | orchestrator | service-ks-register : octavia | Creating/deleting endpoints ------------- 6.48s
2026-03-17 01:19:25.120213 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.75s
2026-03-17 01:19:25.120219 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.63s
2026-03-17 01:19:25.120225 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.39s
2026-03-17 01:19:25.120238 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.39s
2026-03-17 01:19:25.120245 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.36s
2026-03-17 01:19:25.120251 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.18s
2026-03-17 01:19:25.120257 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 4.93s
2026-03-17 01:19:25.120263 | orchestrator | 2026-03-17 01:19:25 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:19:28.176291 | orchestrator | 2026-03-17 01:19:28 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:19:28.176341 | orchestrator | 2026-03-17 01:19:28 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:19:31.209583 | orchestrator | 2026-03-17 01:19:31 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state STARTED
2026-03-17 01:19:31.209682 | orchestrator | 2026-03-17 01:19:31 | INFO  | Wait 1 second(s) until the next check
2026-03-17 01:19:34.263586 | orchestrator | 2026-03-17 01:19:34 | INFO  | Task 92551940-3e65-4dd1-b3de-bf3dba1824e7 is in state SUCCESS
2026-03-17 01:19:34.265586 | orchestrator |
2026-03-17 01:19:34.265780 | orchestrator |
2026-03-17 01:19:34.265798 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-17 01:19:34.265806 | orchestrator |
2026-03-17 01:19:34.265813 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-17 01:19:34.265819 | orchestrator | Tuesday 17 March 2026 01:10:22 +0000 (0:00:00.628) 0:00:00.628 *********
2026-03-17 01:19:34.265826 | orchestrator | changed: [testbed-manager]
2026-03-17 01:19:34.265834 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.265840 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:34.265847 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:34.265854 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:19:34.266205 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:19:34.266216 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:19:34.266223 | orchestrator |
2026-03-17 01:19:34.266230 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-17 01:19:34.266690 | orchestrator | Tuesday 17 March 2026 01:10:23 +0000 (0:00:01.333) 0:00:01.962 *********
2026-03-17 01:19:34.266719 | orchestrator | changed: [testbed-manager]
2026-03-17 01:19:34.266726 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.266732 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:34.266739 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:34.266745 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:19:34.266752 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:19:34.266759 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:19:34.266765 | orchestrator |
2026-03-17 01:19:34.266772 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-17 01:19:34.266779 | orchestrator | Tuesday 17 March 2026 01:10:24 +0000 (0:00:01.182) 0:00:03.144 *********
2026-03-17 01:19:34.266786 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-17 01:19:34.266792 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-17 01:19:34.266797 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-17 01:19:34.266804 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-17 01:19:34.266811 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-17 01:19:34.266817 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-17 01:19:34.266824 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-17 01:19:34.266831 | orchestrator |
2026-03-17 01:19:34.266837 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-17 01:19:34.266844 | orchestrator |
2026-03-17 01:19:34.266850 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-17 01:19:34.266856 | orchestrator | Tuesday 17 March 2026 01:10:25 +0000 (0:00:00.535) 0:00:03.679 *********
2026-03-17 01:19:34.266863 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:19:34.266870 | orchestrator |
2026-03-17 01:19:34.266877 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-17 01:19:34.266883 | orchestrator | Tuesday 17 March 2026 01:10:25 +0000 (0:00:00.742) 0:00:04.422 *********
2026-03-17 01:19:34.266890 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-17 01:19:34.266897 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-17 01:19:34.266902 | orchestrator |
2026-03-17 01:19:34.266908 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-17 01:19:34.266914 | orchestrator | Tuesday 17 March 2026 01:10:30 +0000 (0:00:04.963) 0:00:09.385 *********
2026-03-17 01:19:34.266920 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:19:34.266927 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-17 01:19:34.266934 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.266940 | orchestrator |
2026-03-17 01:19:34.266946 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-17 01:19:34.266954 | orchestrator | Tuesday 17 March 2026 01:10:34 +0000 (0:00:03.601) 0:00:12.986 *********
2026-03-17 01:19:34.266961 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.266967 | orchestrator |
2026-03-17 01:19:34.266974 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-17 01:19:34.266980 | orchestrator | Tuesday 17 March 2026 01:10:35 +0000 (0:00:00.731) 0:00:13.718 *********
2026-03-17 01:19:34.266986 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.266992 | orchestrator |
2026-03-17 01:19:34.266998 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-17 01:19:34.267005 | orchestrator | Tuesday 17 March 2026 01:10:36 +0000 (0:00:01.564) 0:00:15.282 *********
2026-03-17 01:19:34.267011 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.267018 | orchestrator |
2026-03-17 01:19:34.267024 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-17 01:19:34.267032 | orchestrator | Tuesday 17 March 2026 01:10:39 +0000 (0:00:02.844) 0:00:18.127 *********
2026-03-17 01:19:34.267054 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.267060 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.267067 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.267073 | orchestrator |
2026-03-17 01:19:34.267079 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-17 01:19:34.267086 | orchestrator | Tuesday 17 March 2026 01:10:39 +0000 (0:00:00.406) 0:00:18.533 *********
2026-03-17 01:19:34.267092 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:19:34.267099 | orchestrator |
2026-03-17 01:19:34.267105 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-17 01:19:34.267112 | orchestrator | Tuesday 17 March 2026 01:11:10 +0000 (0:00:30.795) 0:00:49.329 *********
2026-03-17 01:19:34.267119 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.267126 | orchestrator |
2026-03-17 01:19:34.267145 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-17 01:19:34.267152 | orchestrator | Tuesday 17 March 2026 01:11:25 +0000 (0:00:14.945) 0:01:04.274 *********
2026-03-17 01:19:34.267158 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:19:34.267164 | orchestrator |
2026-03-17 01:19:34.267170 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-17 01:19:34.267175 | orchestrator | Tuesday 17 March 2026 01:11:39 +0000 (0:00:13.889) 0:01:18.164 *********
2026-03-17 01:19:34.267221 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:19:34.267229 | orchestrator |
2026-03-17 01:19:34.267236 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-17 01:19:34.267242 | orchestrator | Tuesday 17 March 2026 01:11:40 +0000 (0:00:00.770) 0:01:18.934 *********
2026-03-17 01:19:34.267248 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.267255 | orchestrator |
2026-03-17 01:19:34.267261 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-17 01:19:34.267267 | orchestrator | Tuesday 17 March 2026 01:11:41 +0000 (0:00:00.752) 0:01:19.687 *********
2026-03-17 01:19:34.267274 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:19:34.267281 | orchestrator |
2026-03-17 01:19:34.267288 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-17 01:19:34.267294 | orchestrator | Tuesday 17 March 2026 01:11:41 +0000 (0:00:00.691) 0:01:20.379 *********
2026-03-17 01:19:34.267301 | orchestrator | ok: [testbed-node-0]
2026-03-17 01:19:34.267308 | orchestrator |
2026-03-17 01:19:34.267314 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-17 01:19:34.267321 | orchestrator | Tuesday 17 March 2026 01:11:59 +0000 (0:00:17.837) 0:01:38.217 *********
2026-03-17 01:19:34.267328 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.267334 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.267342 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.267348 | orchestrator |
2026-03-17 01:19:34.267354 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-17 01:19:34.267360 | orchestrator |
2026-03-17 01:19:34.267776 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-17 01:19:34.267798 | orchestrator | Tuesday 17 March 2026 01:11:59 +0000 (0:00:00.279) 0:01:38.496 *********
2026-03-17 01:19:34.267805 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:19:34.267812 | orchestrator |
2026-03-17 01:19:34.267819 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-17 01:19:34.267825 | orchestrator | Tuesday 17 March 2026 01:12:00 +0000 (0:00:00.691) 0:01:39.187 *********
2026-03-17 01:19:34.267831 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.267838 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.267846 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.267852 | orchestrator |
2026-03-17 01:19:34.267859 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-17 01:19:34.267865 | orchestrator | Tuesday 17 March 2026 01:12:02 +0000 (0:00:02.087) 0:01:41.275 *********
2026-03-17 01:19:34.267885 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.267891 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.267898 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.267904 | orchestrator |
2026-03-17 01:19:34.267910 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-17 01:19:34.267917 | orchestrator | Tuesday 17 March 2026 01:12:04 +0000 (0:00:02.096) 0:01:43.371 *********
2026-03-17 01:19:34.267924 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.267930 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.267937 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.267943 | orchestrator |
2026-03-17 01:19:34.267950 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-17 01:19:34.267957 | orchestrator | Tuesday 17 March 2026 01:12:05 +0000 (0:00:00.388) 0:01:43.760 *********
2026-03-17 01:19:34.267964 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-17 01:19:34.267970 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.267977 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-17 01:19:34.267984 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.267990 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-17 01:19:34.267997 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-17 01:19:34.268003 | orchestrator |
2026-03-17 01:19:34.268010 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-17 01:19:34.268016 | orchestrator | Tuesday 17 March 2026 01:12:14 +0000 (0:00:09.724) 0:01:53.485 *********
2026-03-17 01:19:34.268023 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.268030 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.268037 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.268044 | orchestrator |
2026-03-17 01:19:34.268051 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-17 01:19:34.268057 | orchestrator | Tuesday 17 March 2026 01:12:15 +0000 (0:00:00.309) 0:01:53.794 *********
2026-03-17 01:19:34.268064 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-17 01:19:34.268071 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.268077 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-17 01:19:34.268083 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.268089 | orchestrator | skipping:
[testbed-node-2] => (item=None)  2026-03-17 01:19:34.268095 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.268102 | orchestrator | 2026-03-17 01:19:34.268108 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-17 01:19:34.268115 | orchestrator | Tuesday 17 March 2026 01:12:16 +0000 (0:00:01.032) 0:01:54.826 ********* 2026-03-17 01:19:34.268122 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.268129 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.268135 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:34.268141 | orchestrator | 2026-03-17 01:19:34.268148 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-17 01:19:34.268155 | orchestrator | Tuesday 17 March 2026 01:12:16 +0000 (0:00:00.468) 0:01:55.294 ********* 2026-03-17 01:19:34.268172 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.268179 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.268185 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:34.268192 | orchestrator | 2026-03-17 01:19:34.268199 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-17 01:19:34.268206 | orchestrator | Tuesday 17 March 2026 01:12:17 +0000 (0:00:00.998) 0:01:56.293 ********* 2026-03-17 01:19:34.268212 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.268218 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.268700 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:34.268725 | orchestrator | 2026-03-17 01:19:34.268732 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-17 01:19:34.268749 | orchestrator | Tuesday 17 March 2026 01:12:19 +0000 (0:00:02.083) 0:01:58.376 ********* 2026-03-17 01:19:34.268756 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.268762 | orchestrator | 
skipping: [testbed-node-2] 2026-03-17 01:19:34.268773 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:34.268781 | orchestrator | 2026-03-17 01:19:34.268787 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-17 01:19:34.268793 | orchestrator | Tuesday 17 March 2026 01:12:43 +0000 (0:00:23.567) 0:02:21.944 ********* 2026-03-17 01:19:34.268799 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.268805 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.268811 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:34.268817 | orchestrator | 2026-03-17 01:19:34.268823 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-17 01:19:34.268830 | orchestrator | Tuesday 17 March 2026 01:12:55 +0000 (0:00:12.418) 0:02:34.363 ********* 2026-03-17 01:19:34.268836 | orchestrator | ok: [testbed-node-0] 2026-03-17 01:19:34.268843 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.268849 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.268855 | orchestrator | 2026-03-17 01:19:34.268862 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-17 01:19:34.268868 | orchestrator | Tuesday 17 March 2026 01:12:56 +0000 (0:00:00.809) 0:02:35.172 ********* 2026-03-17 01:19:34.268875 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.268881 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.268888 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:34.268895 | orchestrator | 2026-03-17 01:19:34.268902 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-17 01:19:34.268909 | orchestrator | Tuesday 17 March 2026 01:13:08 +0000 (0:00:11.810) 0:02:46.983 ********* 2026-03-17 01:19:34.268915 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.268921 | orchestrator | skipping: 
[testbed-node-1] 2026-03-17 01:19:34.268965 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.268972 | orchestrator | 2026-03-17 01:19:34.268978 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-17 01:19:34.268985 | orchestrator | Tuesday 17 March 2026 01:13:09 +0000 (0:00:01.236) 0:02:48.220 ********* 2026-03-17 01:19:34.269180 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.269197 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.269203 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.269210 | orchestrator | 2026-03-17 01:19:34.269216 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-17 01:19:34.269223 | orchestrator | 2026-03-17 01:19:34.269229 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-17 01:19:34.269235 | orchestrator | Tuesday 17 March 2026 01:13:09 +0000 (0:00:00.292) 0:02:48.512 ********* 2026-03-17 01:19:34.269241 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:19:34.269249 | orchestrator | 2026-03-17 01:19:34.269255 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] ***************** 2026-03-17 01:19:34.269262 | orchestrator | Tuesday 17 March 2026 01:13:10 +0000 (0:00:00.715) 0:02:49.227 ********* 2026-03-17 01:19:34.269269 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-17 01:19:34.269276 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-17 01:19:34.269282 | orchestrator | 2026-03-17 01:19:34.269288 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] **************** 2026-03-17 01:19:34.269294 | orchestrator | Tuesday 17 March 2026 01:13:13 +0000 (0:00:03.028) 0:02:52.256 ********* 2026-03-17 01:19:34.269301 | 
orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-17 01:19:34.269310 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-17 01:19:34.269328 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-17 01:19:34.269335 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-17 01:19:34.269341 | orchestrator | 2026-03-17 01:19:34.269347 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-17 01:19:34.269353 | orchestrator | Tuesday 17 March 2026 01:13:20 +0000 (0:00:06.656) 0:02:58.913 ********* 2026-03-17 01:19:34.269359 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-17 01:19:34.269366 | orchestrator | 2026-03-17 01:19:34.269416 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-17 01:19:34.269425 | orchestrator | Tuesday 17 March 2026 01:13:23 +0000 (0:00:03.279) 0:03:02.192 ********* 2026-03-17 01:19:34.269431 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-17 01:19:34.269437 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-17 01:19:34.269443 | orchestrator | 2026-03-17 01:19:34.269450 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-17 01:19:34.269456 | orchestrator | Tuesday 17 March 2026 01:13:27 +0000 (0:00:04.357) 0:03:06.550 ********* 2026-03-17 01:19:34.269462 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-17 01:19:34.269468 | orchestrator | 2026-03-17 01:19:34.269484 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] *************** 2026-03-17 01:19:34.269490 | 
orchestrator | Tuesday 17 March 2026 01:13:30 +0000 (0:00:02.916) 0:03:09.467 ********* 2026-03-17 01:19:34.269495 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-17 01:19:34.269501 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-17 01:19:34.269528 | orchestrator | 2026-03-17 01:19:34.269534 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-17 01:19:34.269632 | orchestrator | Tuesday 17 March 2026 01:13:37 +0000 (0:00:06.698) 0:03:16.166 ********* 2026-03-17 01:19:34.269650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.269662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.269680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.269693 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.269779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.269791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.269797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.269811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.269823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.269829 | orchestrator | 2026-03-17 01:19:34.269865 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-17 01:19:34.269874 | orchestrator | Tuesday 17 March 2026 01:13:39 +0000 (0:00:02.131) 0:03:18.297 ********* 2026-03-17 01:19:34.269880 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.269887 | orchestrator | 2026-03-17 01:19:34.269892 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-17 01:19:34.269899 | orchestrator | Tuesday 17 March 2026 01:13:39 +0000 (0:00:00.115) 0:03:18.412 ********* 2026-03-17 01:19:34.269905 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.269911 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.269917 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.269923 | orchestrator | 2026-03-17 01:19:34.269929 | orchestrator | TASK [nova : Check for vendordata file] 
**************************************** 2026-03-17 01:19:34.269934 | orchestrator | Tuesday 17 March 2026 01:13:40 +0000 (0:00:00.276) 0:03:18.689 ********* 2026-03-17 01:19:34.269940 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-17 01:19:34.269946 | orchestrator | 2026-03-17 01:19:34.269952 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-17 01:19:34.269959 | orchestrator | Tuesday 17 March 2026 01:13:40 +0000 (0:00:00.684) 0:03:19.373 ********* 2026-03-17 01:19:34.269964 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.269970 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.269976 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.269982 | orchestrator | 2026-03-17 01:19:34.269987 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-17 01:19:34.269994 | orchestrator | Tuesday 17 March 2026 01:13:41 +0000 (0:00:00.281) 0:03:19.655 ********* 2026-03-17 01:19:34.270000 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-17 01:19:34.270042 | orchestrator | 2026-03-17 01:19:34.270051 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-17 01:19:34.270057 | orchestrator | Tuesday 17 March 2026 01:13:41 +0000 (0:00:00.620) 0:03:20.275 ********* 2026-03-17 01:19:34.270064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 
'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270206 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.270215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.270222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.270234 | orchestrator | 2026-03-17 01:19:34.270242 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS 
certificate] *** 2026-03-17 01:19:34.270249 | orchestrator | Tuesday 17 March 2026 01:13:44 +0000 (0:00:02.966) 0:03:23.242 ********* 2026-03-17 01:19:34.270255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.270279 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.270313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.270341 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.270351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.270430 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.270437 | orchestrator | 2026-03-17 01:19:34.270444 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-17 01:19:34.270450 | orchestrator | Tuesday 17 March 2026 01:13:45 +0000 (0:00:00.639) 0:03:23.881 ********* 2026-03-17 01:19:34.270457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.270508 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.270521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.270544 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.270554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.270602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.270609 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.270616 | orchestrator | 2026-03-17 01:19:34.270623 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-17 01:19:34.270630 | orchestrator | Tuesday 17 March 2026 01:13:46 +0000 (0:00:01.211) 0:03:25.093 ********* 2026-03-17 01:19:34.270638 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.270765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.270772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.270779 | orchestrator | 2026-03-17 01:19:34.270785 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-17 01:19:34.270791 | orchestrator | Tuesday 17 March 2026 01:13:49 +0000 (0:00:02.962) 0:03:28.055 ********* 2026-03-17 01:19:34.270799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270859 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.270949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.270959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.270966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.270974 | orchestrator | 2026-03-17 01:19:34.270981 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-17 01:19:34.270987 | orchestrator | Tuesday 17 March 2026 01:13:57 +0000 (0:00:07.514) 0:03:35.570 ********* 2026-03-17 01:19:34.270995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro' 2026-03-17 01:19:34.271059 | orchestrator | , '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.271067 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.271094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': 
'30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.271121 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.271148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.271170 | orchestrator | 
skipping: [testbed-node-2] 2026-03-17 01:19:34.271177 | orchestrator | 2026-03-17 01:19:34.271184 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-17 01:19:34.271190 | orchestrator | Tuesday 17 March 2026 01:13:57 +0000 (0:00:00.977) 0:03:36.547 ********* 2026-03-17 01:19:34.271197 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.271204 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.271211 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.271217 | orchestrator | 2026-03-17 01:19:34.271224 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-03-17 01:19:34.271231 | orchestrator | Tuesday 17 March 2026 01:13:58 +0000 (0:00:00.933) 0:03:37.481 ********* 2026-03-17 01:19:34.271238 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.271245 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.271251 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.271257 | orchestrator | 2026-03-17 01:19:34.271262 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-03-17 01:19:34.271273 | orchestrator | Tuesday 17 March 2026 01:13:59 +0000 (0:00:00.741) 0:03:38.222 ********* 2026-03-17 01:19:34.271280 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-03-17 01:19:34.271287 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-17 01:19:34.271294 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.271301 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-03-17 01:19:34.271308 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-17 01:19:34.271315 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.271322 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-03-17 01:19:34.271328 | orchestrator | skipping: [testbed-node-2] => 
(item=nova-api)  2026-03-17 01:19:34.271334 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.271341 | orchestrator | 2026-03-17 01:19:34.271348 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-03-17 01:19:34.271355 | orchestrator | Tuesday 17 March 2026 01:14:00 +0000 (0:00:00.355) 0:03:38.577 ********* 2026-03-17 01:19:34.271362 | orchestrator | included: service-uwsgi-config for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-03-17 01:19:34.271371 | orchestrator | included: service-uwsgi-config for testbed-node-1, testbed-node-0, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-03-17 01:19:34.271396 | orchestrator | 2026-03-17 01:19:34.271402 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-03-17 01:19:34.271409 | orchestrator | Tuesday 17 March 2026 01:14:01 +0000 (0:00:01.877) 0:03:40.455 ********* 2026-03-17 01:19:34.271416 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:34.271422 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:34.271430 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:34.271436 | orchestrator | 2026-03-17 01:19:34.271447 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-03-17 01:19:34.271455 | orchestrator | Tuesday 17 March 2026 01:14:03 +0000 (0:00:01.903) 0:03:42.358 ********* 2026-03-17 01:19:34.271462 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:34.271469 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:34.271476 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:34.271482 | orchestrator | 2026-03-17 01:19:34.271508 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-03-17 01:19:34.271516 | orchestrator | Tuesday 17 March 2026 01:14:06 +0000 
(0:00:02.411) 0:03:44.770 ********* 2026-03-17 01:19:34.271524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.271532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.271544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.271571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': 
{'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.271580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.271587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-17 01:19:34.271600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.271607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.271618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.271625 | orchestrator | 2026-03-17 01:19:34.271649 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-03-17 01:19:34.271657 | orchestrator | Tuesday 17 March 2026 01:14:08 +0000 (0:00:02.418) 0:03:47.189 ********* 2026-03-17 01:19:34.271664 | orchestrator | changed: [testbed-node-0] => { 2026-03-17 01:19:34.271681 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:19:34.271687 | orchestrator | } 2026-03-17 01:19:34.271695 | orchestrator | changed: [testbed-node-1] => { 2026-03-17 01:19:34.271703 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:19:34.271711 | orchestrator | } 2026-03-17 01:19:34.271718 | orchestrator | changed: [testbed-node-2] => { 2026-03-17 01:19:34.271726 | orchestrator |  "msg": "Notifying handlers" 2026-03-17 01:19:34.271733 | orchestrator | } 2026-03-17 01:19:34.271754 | orchestrator | 2026-03-17 01:19:34.271762 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-17 01:19:34.271770 | orchestrator | Tuesday 17 March 2026 01:14:08 +0000 (0:00:00.305) 0:03:47.494 ********* 2026-03-17 01:19:34.271778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.271809 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.271842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.271881 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.271890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-17 01:19:34.271931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.271940 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.271952 | orchestrator |
2026-03-17 01:19:34.271959 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-17 01:19:34.271967 | orchestrator | Tuesday 17 March 2026 01:14:09 +0000 (0:00:01.021) 0:03:48.516 *********
2026-03-17 01:19:34.271975 | orchestrator |
2026-03-17 01:19:34.271982 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-17 01:19:34.271988 | orchestrator | Tuesday 17 March 2026 01:14:10 +0000 (0:00:00.172) 0:03:48.688 *********
2026-03-17 01:19:34.271996 | orchestrator |
2026-03-17 01:19:34.272004 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-17 01:19:34.272011 | orchestrator | Tuesday 17 March 2026 01:14:10 +0000 (0:00:00.130) 0:03:48.819 *********
2026-03-17 01:19:34.272019 | orchestrator |
2026-03-17 01:19:34.272026 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-03-17 01:19:34.272034 | orchestrator | Tuesday 17 March 2026 01:14:10 +0000 (0:00:00.135) 0:03:48.955 *********
2026-03-17 01:19:34.272041 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.272048 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:34.272056 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:34.272063 | orchestrator |
2026-03-17 01:19:34.272070 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-03-17 01:19:34.272077 | orchestrator | Tuesday 17 March 2026 01:14:25 +0000 (0:00:14.865) 0:04:03.820 *********
2026-03-17 01:19:34.272084 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.272091 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:34.272098 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:34.272105 | orchestrator |
2026-03-17 01:19:34.272112 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] ***********************
2026-03-17 01:19:34.272119 | orchestrator | Tuesday 17 March 2026 01:14:30 +0000 (0:00:05.553) 0:04:09.373 *********
2026-03-17 01:19:34.272126 | orchestrator | changed: [testbed-node-0]
2026-03-17 01:19:34.272133 | orchestrator | changed: [testbed-node-1]
2026-03-17 01:19:34.272149 | orchestrator | changed: [testbed-node-2]
2026-03-17 01:19:34.272157 | orchestrator |
2026-03-17 01:19:34.272164 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-03-17 01:19:34.272170 | orchestrator |
2026-03-17 01:19:34.272177 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-17 01:19:34.272184 | orchestrator | Tuesday 17 March 2026 01:14:35 +0000 (0:00:04.520) 0:04:13.894 *********
2026-03-17 01:19:34.272191 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:19:34.272197 | orchestrator |
2026-03-17 01:19:34.272203 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-17 01:19:34.272209 | orchestrator | Tuesday 17 March 2026 01:14:36 +0000 (0:00:01.127) 0:04:15.022 *********
2026-03-17 01:19:34.272215 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.272221 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:19:34.272227 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:19:34.272233 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.272239 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.272246 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.272253 | orchestrator |
2026-03-17 01:19:34.272259 | orchestrator | TASK [nova-cell : Get new Libvirt version] *************************************
2026-03-17 01:19:34.272266 | orchestrator | Tuesday 17 March 2026 01:14:37 +0000 (0:00:00.555) 0:04:15.577 *********
2026-03-17 01:19:34.272273 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:19:34.272279 | orchestrator |
2026-03-17 01:19:34.272286 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-03-17 01:19:34.272293 | orchestrator | Tuesday 17 March 2026 01:14:56 +0000 (0:00:19.648) 0:04:35.226 *********
2026-03-17 01:19:34.272338 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:19:34.272345 | orchestrator |
2026-03-17 01:19:34.272352 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-03-17 01:19:34.272365 | orchestrator | Tuesday 17 March 2026 01:14:58 +0000 (0:00:01.564) 0:04:36.790 *********
2026-03-17 01:19:34.272390 | orchestrator | included: service-image-info for testbed-node-3
2026-03-17 01:19:34.272397 | orchestrator |
2026-03-17 01:19:34.272404 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-03-17 01:19:34.272410 | orchestrator | Tuesday 17 March 2026 01:14:58 +0000 (0:00:00.727) 0:04:37.518 *********
2026-03-17 01:19:34.272416 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:19:34.272423 | orchestrator |
2026-03-17 01:19:34.272430 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-03-17 01:19:34.272437 | orchestrator | Tuesday 17 March 2026 01:15:02 +0000 (0:00:03.057) 0:04:40.576 *********
2026-03-17 01:19:34.272443 | orchestrator | ok: [testbed-node-3]
2026-03-17 01:19:34.272449 | orchestrator |
2026-03-17 01:19:34.272460 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-03-17 01:19:34.272467 | orchestrator | Tuesday 17 March 2026 01:15:03 +0000 (0:00:01.635) 0:04:42.211 *********
2026-03-17 01:19:34.272474 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.272481 | orchestrator |
2026-03-17 01:19:34.272488 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-03-17 01:19:34.272494 | orchestrator | Tuesday 17 March 2026 01:15:05 +0000 (0:00:01.795) 0:04:44.007 *********
2026-03-17 01:19:34.272501 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.272508 | orchestrator |
2026-03-17 01:19:34.272540 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-03-17 01:19:34.272549 | orchestrator | Tuesday 17 March 2026 01:15:07 +0000 (0:00:01.793) 0:04:45.801 *********
2026-03-17 01:19:34.272556 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 01:19:34.272563 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-17 01:19:34.272569 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-17 01:19:34.272576 | orchestrator |
2026-03-17 01:19:34.272583 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-03-17 01:19:34.272590 | orchestrator | Tuesday 17 March 2026 01:15:16 +0000 (0:00:09.161) 0:04:54.962 *********
2026-03-17 01:19:34.272597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-17 01:19:34.272604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-17 01:19:34.272611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-17 01:19:34.272617 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.272624 | orchestrator |
2026-03-17 01:19:34.272631 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-03-17 01:19:34.272638 | orchestrator | Tuesday 17 March 2026 01:15:21 +0000 (0:00:05.061) 0:05:00.024 *********
2026-03-17 01:19:34.272646 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-3', 'ansible_loop_var': 'item'})
2026-03-17 01:19:34.272655 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-4', 'ansible_loop_var': 'item'})
2026-03-17 01:19:34.272662 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-5', 'ansible_loop_var': 'item'})
2026-03-17 01:19:34.272669 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.272676 | orchestrator |
2026-03-17 01:19:34.272683 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-03-17 01:19:34.272696 | orchestrator | Tuesday 17 March 2026 01:15:24 +0000 (0:00:03.240) 0:05:03.265 *********
2026-03-17 01:19:34.272702 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.272709 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.272715 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.272722 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-17 01:19:34.272729 | orchestrator |
2026-03-17 01:19:34.272736 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-17 01:19:34.272743 | orchestrator | Tuesday 17 March 2026 01:15:25 +0000 (0:00:00.942) 0:05:04.208 *********
2026-03-17 01:19:34.272749 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-03-17 01:19:34.272757 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-03-17 01:19:34.272764 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-03-17 01:19:34.272771 | orchestrator |
2026-03-17 01:19:34.272777 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-17 01:19:34.272783 | orchestrator | Tuesday 17 March 2026 01:15:26 +0000 (0:00:00.725) 0:05:04.933 *********
2026-03-17 01:19:34.272790 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-03-17 01:19:34.272796 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-03-17 01:19:34.272802 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-03-17 01:19:34.272808 | orchestrator |
2026-03-17 01:19:34.272815 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-17 01:19:34.272821 | orchestrator | Tuesday 17 March 2026 01:15:27 +0000 (0:00:01.410) 0:05:06.344 *********
2026-03-17 01:19:34.272827 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-03-17 01:19:34.272833 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.272839 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-03-17 01:19:34.272845 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:19:34.272850 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-03-17 01:19:34.272856 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:19:34.272861 | orchestrator |
2026-03-17 01:19:34.272867 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-03-17 01:19:34.272873 | orchestrator | Tuesday 17 March 2026 01:15:28 +0000 (0:00:00.528) 0:05:06.872 *********
2026-03-17 01:19:34.272879 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-17 01:19:34.272885 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-17 01:19:34.272895 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.272900 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-17 01:19:34.272906 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-17 01:19:34.272913 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-17 01:19:34.272942 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-17 01:19:34.272949 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-17 01:19:34.272955 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.272961 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-17 01:19:34.272967 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-17 01:19:34.272974 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.272981 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-17 01:19:34.272988 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-17 01:19:34.272994 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-17 01:19:34.273001 | orchestrator |
2026-03-17 01:19:34.273008 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-17 01:19:34.273023 | orchestrator | Tuesday 17 March 2026 01:15:29 +0000 (0:00:01.008) 0:05:07.881 *********
2026-03-17 01:19:34.273029 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.273035 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.273041 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.273048 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:19:34.273055 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:19:34.273062 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:19:34.273069 | orchestrator |
2026-03-17 01:19:34.273076 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-17 01:19:34.273083 | orchestrator | Tuesday 17 March 2026 01:15:30 +0000 (0:00:01.064) 0:05:08.946 *********
2026-03-17 01:19:34.273089 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.273096 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.273103 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.273109 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:19:34.273116 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:19:34.273122 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:19:34.273129 | orchestrator |
2026-03-17 01:19:34.273136 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-17 01:19:34.273143 | orchestrator | Tuesday 17 March 2026 01:15:31 +0000 (0:00:01.489) 0:05:10.435 *********
2026-03-17 01:19:34.273150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.273160 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.273190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.273204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.273213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.273222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.273229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.273237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.273244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.273271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273320 | orchestrator |
2026-03-17 01:19:34.273326 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-17 01:19:34.273345 | orchestrator | Tuesday 17 March 2026 01:15:34 +0000 (0:00:02.206) 0:05:12.642 *********
2026-03-17 01:19:34.273353 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-17 01:19:34.273362 | orchestrator |
2026-03-17 01:19:34.273369 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-17 01:19:34.273442 | orchestrator | Tuesday 17 March 2026 01:15:35 +0000 (0:00:01.184) 0:05:13.826 *********
2026-03-17 01:19:34.273450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.273458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.273465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.273473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.273487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.273517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.273524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.273531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.273539 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.273545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.273613 | orchestrator |
2026-03-17 01:19:34.273620 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-03-17 01:19:34.273627 | orchestrator | Tuesday 17 March 2026 01:15:39 +0000 (0:00:04.069) 0:05:17.896 *********
2026-03-17 01:19:34.273635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:19:34.273652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:19:34.273676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:19:34.273683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:19:34.273692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:19:34.273699 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:19:34.273706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.273718 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.273741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.273748 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.273755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.273761 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.273768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:19:34.273776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:19:34.273783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.273796 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.273803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:19:34.273813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.273819 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.273841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.273848 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.273856 | orchestrator | 2026-03-17 01:19:34.273862 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-17 01:19:34.273868 | orchestrator | Tuesday 17 March 2026 01:15:41 +0000 (0:00:01.780) 0:05:19.677 ********* 2026-03-17 01:19:34.273874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:19:34.273880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:19:34.273892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': 
'30'}}})  2026-03-17 01:19:34.273898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:19:34.273927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:19:34.273935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:19:34.273942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.273948 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.273955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.273965 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.273973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.273980 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.273989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:19:34.274012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:19:34.274075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.274081 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.274087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.274099 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.274105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:19:34.274111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.274118 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.274124 | orchestrator | 2026-03-17 01:19:34.274130 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:19:34.274137 | orchestrator | Tuesday 17 March 2026 01:15:43 +0000 (0:00:02.492) 0:05:22.169 ********* 2026-03-17 01:19:34.274143 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.274148 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.274154 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.274161 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:19:34.274168 | orchestrator | 2026-03-17 01:19:34.274174 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-17 01:19:34.274180 | orchestrator | Tuesday 17 March 2026 01:15:44 +0000 (0:00:00.996) 0:05:23.166 ********* 2026-03-17 01:19:34.274186 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:19:34.274192 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:19:34.274206 | orchestrator | 
ok: [testbed-node-5 -> localhost] 2026-03-17 01:19:34.274212 | orchestrator | 2026-03-17 01:19:34.274218 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-17 01:19:34.274225 | orchestrator | Tuesday 17 March 2026 01:15:45 +0000 (0:00:00.949) 0:05:24.115 ********* 2026-03-17 01:19:34.274231 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:19:34.274238 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 01:19:34.274265 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:19:34.274273 | orchestrator | 2026-03-17 01:19:34.274279 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-17 01:19:34.274285 | orchestrator | Tuesday 17 March 2026 01:15:46 +0000 (0:00:00.795) 0:05:24.910 ********* 2026-03-17 01:19:34.274292 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:19:34.274298 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:19:34.274305 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:19:34.274312 | orchestrator | 2026-03-17 01:19:34.274318 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-17 01:19:34.274325 | orchestrator | Tuesday 17 March 2026 01:15:46 +0000 (0:00:00.506) 0:05:25.417 ********* 2026-03-17 01:19:34.274330 | orchestrator | ok: [testbed-node-3] 2026-03-17 01:19:34.274337 | orchestrator | ok: [testbed-node-4] 2026-03-17 01:19:34.274343 | orchestrator | ok: [testbed-node-5] 2026-03-17 01:19:34.274350 | orchestrator | 2026-03-17 01:19:34.274357 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-17 01:19:34.274364 | orchestrator | Tuesday 17 March 2026 01:15:47 +0000 (0:00:00.566) 0:05:25.983 ********* 2026-03-17 01:19:34.274370 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-17 01:19:34.274404 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-17 
01:19:34.274411 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-17 01:19:34.274418 | orchestrator | 2026-03-17 01:19:34.274424 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-17 01:19:34.274430 | orchestrator | Tuesday 17 March 2026 01:15:48 +0000 (0:00:01.245) 0:05:27.229 ********* 2026-03-17 01:19:34.274437 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-17 01:19:34.274444 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-17 01:19:34.274450 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-17 01:19:34.274457 | orchestrator | 2026-03-17 01:19:34.274464 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-17 01:19:34.274471 | orchestrator | Tuesday 17 March 2026 01:15:49 +0000 (0:00:01.238) 0:05:28.468 ********* 2026-03-17 01:19:34.274477 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-17 01:19:34.274483 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-17 01:19:34.274490 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-17 01:19:34.274495 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-17 01:19:34.274500 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-17 01:19:34.274507 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-17 01:19:34.274513 | orchestrator | 2026-03-17 01:19:34.274520 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-17 01:19:34.274527 | orchestrator | Tuesday 17 March 2026 01:15:53 +0000 (0:00:03.445) 0:05:31.913 ********* 2026-03-17 01:19:34.274534 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.274541 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.274547 | orchestrator | skipping: [testbed-node-5] 
2026-03-17 01:19:34.274554 | orchestrator | 2026-03-17 01:19:34.274560 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-17 01:19:34.274566 | orchestrator | Tuesday 17 March 2026 01:15:53 +0000 (0:00:00.486) 0:05:32.400 ********* 2026-03-17 01:19:34.274573 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.274579 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.274585 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.274592 | orchestrator | 2026-03-17 01:19:34.274598 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-17 01:19:34.274605 | orchestrator | Tuesday 17 March 2026 01:15:54 +0000 (0:00:00.296) 0:05:32.696 ********* 2026-03-17 01:19:34.274611 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:19:34.274618 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:19:34.274625 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:19:34.274632 | orchestrator | 2026-03-17 01:19:34.274638 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-17 01:19:34.274645 | orchestrator | Tuesday 17 March 2026 01:15:55 +0000 (0:00:01.090) 0:05:33.787 ********* 2026-03-17 01:19:34.274653 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-03-17 01:19:34.274661 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-03-17 01:19:34.274667 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-03-17 
01:19:34.274675 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-03-17 01:19:34.274683 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-03-17 01:19:34.274705 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-03-17 01:19:34.274712 | orchestrator | 2026-03-17 01:19:34.274718 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-17 01:19:34.274750 | orchestrator | Tuesday 17 March 2026 01:15:58 +0000 (0:00:03.346) 0:05:37.134 ********* 2026-03-17 01:19:34.274759 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 01:19:34.274766 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 01:19:34.274773 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 01:19:34.274779 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-17 01:19:34.274785 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:19:34.274792 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-17 01:19:34.274799 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:19:34.274805 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-17 01:19:34.274811 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:19:34.274817 | orchestrator | 2026-03-17 01:19:34.274823 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-17 01:19:34.274829 | orchestrator | Tuesday 17 March 2026 01:16:01 +0000 (0:00:03.274) 0:05:40.408 
********* 2026-03-17 01:19:34.274835 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.274841 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.274847 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.274853 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-17 01:19:34.274860 | orchestrator | 2026-03-17 01:19:34.274867 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-17 01:19:34.274873 | orchestrator | Tuesday 17 March 2026 01:16:03 +0000 (0:00:01.511) 0:05:41.919 ********* 2026-03-17 01:19:34.274879 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:19:34.274886 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-17 01:19:34.274893 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-17 01:19:34.274899 | orchestrator | 2026-03-17 01:19:34.274905 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-17 01:19:34.274911 | orchestrator | Tuesday 17 March 2026 01:16:04 +0000 (0:00:01.147) 0:05:43.067 ********* 2026-03-17 01:19:34.274918 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.274924 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.274931 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.274938 | orchestrator | 2026-03-17 01:19:34.274944 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-17 01:19:34.274951 | orchestrator | Tuesday 17 March 2026 01:16:04 +0000 (0:00:00.276) 0:05:43.343 ********* 2026-03-17 01:19:34.274957 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.274964 | orchestrator | 2026-03-17 01:19:34.274969 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-17 01:19:34.274975 | orchestrator | Tuesday 17 March 2026 01:16:04 
+0000 (0:00:00.131) 0:05:43.475 ********* 2026-03-17 01:19:34.274981 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.274987 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.274993 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.274999 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.275006 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.275013 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.275020 | orchestrator | 2026-03-17 01:19:34.275026 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-17 01:19:34.275033 | orchestrator | Tuesday 17 March 2026 01:16:05 +0000 (0:00:00.742) 0:05:44.218 ********* 2026-03-17 01:19:34.275045 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-17 01:19:34.275052 | orchestrator | 2026-03-17 01:19:34.275058 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-17 01:19:34.275064 | orchestrator | Tuesday 17 March 2026 01:16:06 +0000 (0:00:00.734) 0:05:44.952 ********* 2026-03-17 01:19:34.275070 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.275077 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.275083 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.275090 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.275096 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.275103 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.275110 | orchestrator | 2026-03-17 01:19:34.275117 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-17 01:19:34.275124 | orchestrator | Tuesday 17 March 2026 01:16:06 +0000 (0:00:00.542) 0:05:45.494 ********* 2026-03-17 01:19:34.275133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275215 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275312 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275325 | orchestrator | 2026-03-17 01:19:34.275332 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-17 01:19:34.275339 | orchestrator | Tuesday 17 March 2026 01:16:11 +0000 (0:00:04.169) 0:05:49.663 ********* 2026-03-17 01:19:34.275345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:19:34.275357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:19:34.275363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:19:34.275416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}})  2026-03-17 01:19:34.275450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-17 01:19:34.275457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-17 01:19:34.275471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275525 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-17 01:19:34.275544 | orchestrator | 2026-03-17 01:19:34.275552 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] 
******************* 2026-03-17 01:19:34.275558 | orchestrator | Tuesday 17 March 2026 01:16:16 +0000 (0:00:05.578) 0:05:55.242 ********* 2026-03-17 01:19:34.275565 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.275572 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.275578 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.275585 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.275649 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.275656 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.275662 | orchestrator | 2026-03-17 01:19:34.275669 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-17 01:19:34.275675 | orchestrator | Tuesday 17 March 2026 01:16:18 +0000 (0:00:01.697) 0:05:56.940 ********* 2026-03-17 01:19:34.275682 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-17 01:19:34.275696 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-17 01:19:34.275703 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-17 01:19:34.275710 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-17 01:19:34.275716 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.275732 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-17 01:19:34.275737 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.275742 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-17 01:19:34.275748 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-17 01:19:34.275759 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.275766 
| orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-17 01:19:34.275772 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-17 01:19:34.275778 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-17 01:19:34.275784 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-17 01:19:34.275791 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-17 01:19:34.275796 | orchestrator | 2026-03-17 01:19:34.275802 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-17 01:19:34.275808 | orchestrator | Tuesday 17 March 2026 01:16:21 +0000 (0:00:03.553) 0:06:00.493 ********* 2026-03-17 01:19:34.275813 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.275819 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.275825 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.275831 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.275837 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.275842 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.275848 | orchestrator | 2026-03-17 01:19:34.275853 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-17 01:19:34.275858 | orchestrator | Tuesday 17 March 2026 01:16:22 +0000 (0:00:00.724) 0:06:01.218 ********* 2026-03-17 01:19:34.275863 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-17 01:19:34.275870 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-17 01:19:34.275875 | orchestrator | skipping: [testbed-node-2] 
=> (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-17 01:19:34.275881 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-17 01:19:34.275887 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-17 01:19:34.275895 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-17 01:19:34.275901 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-17 01:19:34.275906 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-17 01:19:34.275912 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-17 01:19:34.275918 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-17 01:19:34.275923 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.275929 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-17 01:19:34.275935 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.275940 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-17 01:19:34.275946 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.275952 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-17 01:19:34.275958 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-17 
01:19:34.275964 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-17 01:19:34.275977 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-17 01:19:34.275983 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-17 01:19:34.275990 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-17 01:19:34.275996 | orchestrator |
2026-03-17 01:19:34.276002 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-17 01:19:34.276014 | orchestrator | Tuesday 17 March 2026 01:16:27 +0000 (0:00:04.696) 0:06:05.914 *********
2026-03-17 01:19:34.276021 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:19:34.276027 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:19:34.276033 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:19:34.276046 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:19:34.276052 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:19:34.276058 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:19:34.276064 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:19:34.276071 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:19:34.276076 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-17 01:19:34.276081 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:19:34.276087 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:19:34.276094 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:19:34.276100 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:19:34.276105 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.276111 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:19:34.276117 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.276122 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:19:34.276128 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.276134 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:19:34.276140 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:19:34.276145 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-17 01:19:34.276150 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:19:34.276156 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:19:34.276162 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-17 01:19:34.276168 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:19:34.276174 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:19:34.276180 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-17 01:19:34.276185 | orchestrator |
2026-03-17 01:19:34.276192 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-17 01:19:34.276198 | orchestrator | Tuesday 17 March 2026 01:16:34 +0000 (0:00:07.316) 0:06:13.231 *********
2026-03-17 01:19:34.276204 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.276216 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:19:34.276222 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:19:34.276228 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.276234 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.276240 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.276245 | orchestrator |
2026-03-17 01:19:34.276251 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-17 01:19:34.276258 | orchestrator | Tuesday 17 March 2026 01:16:35 +0000 (0:00:00.825) 0:06:14.056 *********
2026-03-17 01:19:34.276264 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.276270 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:19:34.276275 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:19:34.276280 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.276286 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.276292 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.276297 | orchestrator |
2026-03-17 01:19:34.276303 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-17 01:19:34.276310 | orchestrator | Tuesday 17 March 2026 01:16:36 +0000 (0:00:00.613) 0:06:14.669 *********
2026-03-17 01:19:34.276316 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.276322 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.276327 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.276332 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:19:34.276338 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:19:34.276343 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:19:34.276349 | orchestrator |
2026-03-17 01:19:34.276354 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-03-17 01:19:34.276360 | orchestrator | Tuesday 17 March 2026 01:16:38 +0000 (0:00:02.119) 0:06:16.788 *********
2026-03-17 01:19:34.276366 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.276393 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.276399 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.276406 | orchestrator | changed: [testbed-node-3]
2026-03-17 01:19:34.276411 | orchestrator | changed: [testbed-node-4]
2026-03-17 01:19:34.276417 | orchestrator | changed: [testbed-node-5]
2026-03-17 01:19:34.276423 | orchestrator |
2026-03-17 01:19:34.276428 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-17 01:19:34.276434 | orchestrator | Tuesday 17 March 2026 01:16:40 +0000 (0:00:02.087) 0:06:18.876 *********
2026-03-17 01:19:34.276454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.276463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.276470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.276483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276490 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:19:34.276497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.276512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276520 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.276526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.276538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.276545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276552 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:19:34.276559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.276566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276573 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.276587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.276595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276607 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.276614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.276621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276628 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.276634 | orchestrator |
2026-03-17 01:19:34.276641 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-17 01:19:34.276648 | orchestrator | Tuesday 17 March 2026 01:16:41 +0000 (0:00:01.555) 0:06:20.432 *********
2026-03-17 01:19:34.276655 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-17 01:19:34.276662 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-17 01:19:34.276669 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.276675 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-17 01:19:34.276680 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-17 01:19:34.276687 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:19:34.276693 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-17 01:19:34.276700 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-17 01:19:34.276706 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:19:34.276713 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-17 01:19:34.276720 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-17 01:19:34.276727 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.276734 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-17 01:19:34.276740 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-17 01:19:34.276747 | orchestrator | skipping: [testbed-node-1]
2026-03-17 01:19:34.276753 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-17 01:19:34.276760 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-17 01:19:34.276766 | orchestrator | skipping: [testbed-node-2]
2026-03-17 01:19:34.276772 | orchestrator |
2026-03-17 01:19:34.276779 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] *****************
2026-03-17 01:19:34.276786 | orchestrator | Tuesday 17 March 2026 01:16:42 +0000 (0:00:00.655) 0:06:21.087 *********
2026-03-17 01:19:34.276801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.276815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.276821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.276829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.276836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.276842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.276856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.276868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.276875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.276882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.276941 | orchestrator |
2026-03-17 01:19:34.276948 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] ***
2026-03-17 01:19:34.276955 | orchestrator | Tuesday 17 March 2026 01:16:45 +0000 (0:00:02.805) 0:06:23.892 *********
2026-03-17 01:19:34.276962 | orchestrator | changed: [testbed-node-3] => {
2026-03-17 01:19:34.276969 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 01:19:34.276976 | orchestrator | }
2026-03-17 01:19:34.276983 | orchestrator | changed: [testbed-node-4] => {
2026-03-17 01:19:34.276989 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 01:19:34.276996 | orchestrator | }
2026-03-17 01:19:34.277003 | orchestrator | changed: [testbed-node-5] => {
2026-03-17 01:19:34.277009 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 01:19:34.277016 | orchestrator | }
2026-03-17 01:19:34.277022 | orchestrator | changed: [testbed-node-0] => {
2026-03-17 01:19:34.277029 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 01:19:34.277036 | orchestrator | }
2026-03-17 01:19:34.277043 | orchestrator | changed: [testbed-node-1] => {
2026-03-17 01:19:34.277049 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 01:19:34.277055 | orchestrator | }
2026-03-17 01:19:34.277062 | orchestrator | changed: [testbed-node-2] => {
2026-03-17 01:19:34.277068 | orchestrator |  "msg": "Notifying handlers"
2026-03-17 01:19:34.277074 | orchestrator | }
2026-03-17 01:19:34.277080 | orchestrator |
2026-03-17 01:19:34.277086 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-17 01:19:34.277093 | orchestrator | Tuesday 17 March 2026 01:16:45 +0000 (0:00:00.599) 0:06:24.492 *********
2026-03-17 01:19:34.277101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.277113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.277131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.277139 | orchestrator | skipping: [testbed-node-3]
2026-03-17 01:19:34.277146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.277153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.277160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.277173 | orchestrator | skipping: [testbed-node-4]
2026-03-17 01:19:34.277180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-17 01:19:34.277195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-17 01:19:34.277202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.277209 | orchestrator | skipping: [testbed-node-5]
2026-03-17 01:19:34.277216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-17 01:19:34.277223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-17 01:19:34.277230 | orchestrator | skipping: [testbed-node-0]
2026-03-17 01:19:34.277237 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:19:34.277249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.277257 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.277268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-17 01:19:34.277279 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-17 01:19:34.277286 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.277294 | orchestrator | 2026-03-17 01:19:34.277301 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-17 01:19:34.277308 | orchestrator | Tuesday 17 March 2026 01:16:47 +0000 (0:00:02.043) 0:06:26.536 ********* 2026-03-17 01:19:34.277315 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.277321 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.277328 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.277335 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.277342 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.277349 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.277355 | orchestrator | 2026-03-17 01:19:34.277362 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:19:34.277369 | orchestrator | Tuesday 17 March 2026 01:16:48 +0000 (0:00:00.693) 0:06:27.230 ********* 2026-03-17 01:19:34.277397 | orchestrator | 2026-03-17 01:19:34.277404 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:19:34.277411 | orchestrator | Tuesday 17 March 2026 01:16:48 +0000 (0:00:00.126) 0:06:27.356 ********* 2026-03-17 01:19:34.277417 | orchestrator | 2026-03-17 01:19:34.277424 | orchestrator | 
TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:19:34.277430 | orchestrator | Tuesday 17 March 2026 01:16:48 +0000 (0:00:00.127) 0:06:27.484 ********* 2026-03-17 01:19:34.277438 | orchestrator | 2026-03-17 01:19:34.277445 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:19:34.277452 | orchestrator | Tuesday 17 March 2026 01:16:49 +0000 (0:00:00.125) 0:06:27.609 ********* 2026-03-17 01:19:34.277458 | orchestrator | 2026-03-17 01:19:34.277470 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:19:34.277477 | orchestrator | Tuesday 17 March 2026 01:16:49 +0000 (0:00:00.124) 0:06:27.734 ********* 2026-03-17 01:19:34.277484 | orchestrator | 2026-03-17 01:19:34.277490 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-17 01:19:34.277497 | orchestrator | Tuesday 17 March 2026 01:16:49 +0000 (0:00:00.253) 0:06:27.987 ********* 2026-03-17 01:19:34.277504 | orchestrator | 2026-03-17 01:19:34.277511 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-17 01:19:34.277518 | orchestrator | Tuesday 17 March 2026 01:16:49 +0000 (0:00:00.123) 0:06:28.110 ********* 2026-03-17 01:19:34.277525 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:34.277532 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:34.277539 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:34.277546 | orchestrator | 2026-03-17 01:19:34.277553 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-17 01:19:34.277560 | orchestrator | Tuesday 17 March 2026 01:17:01 +0000 (0:00:11.457) 0:06:39.568 ********* 2026-03-17 01:19:34.277567 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:34.277574 | orchestrator | changed: [testbed-node-1] 2026-03-17 
01:19:34.277580 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:34.277586 | orchestrator | 2026-03-17 01:19:34.277593 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-17 01:19:34.277599 | orchestrator | Tuesday 17 March 2026 01:17:18 +0000 (0:00:17.100) 0:06:56.668 ********* 2026-03-17 01:19:34.277606 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:19:34.277612 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:19:34.277618 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:19:34.277625 | orchestrator | 2026-03-17 01:19:34.277632 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-17 01:19:34.277639 | orchestrator | Tuesday 17 March 2026 01:17:34 +0000 (0:00:16.028) 0:07:12.697 ********* 2026-03-17 01:19:34.277645 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:19:34.277651 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:19:34.277658 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:19:34.277665 | orchestrator | 2026-03-17 01:19:34.277671 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-17 01:19:34.277678 | orchestrator | Tuesday 17 March 2026 01:18:02 +0000 (0:00:27.912) 0:07:40.609 ********* 2026-03-17 01:19:34.277684 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:19:34.277691 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:19:34.277697 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:19:34.277704 | orchestrator | 2026-03-17 01:19:34.277710 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-17 01:19:34.277717 | orchestrator | Tuesday 17 March 2026 01:18:02 +0000 (0:00:00.709) 0:07:41.318 ********* 2026-03-17 01:19:34.277723 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:19:34.277730 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:19:34.277736 
| orchestrator | changed: [testbed-node-5] 2026-03-17 01:19:34.277743 | orchestrator | 2026-03-17 01:19:34.277749 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-17 01:19:34.277756 | orchestrator | Tuesday 17 March 2026 01:18:03 +0000 (0:00:00.696) 0:07:42.015 ********* 2026-03-17 01:19:34.277763 | orchestrator | changed: [testbed-node-3] 2026-03-17 01:19:34.277773 | orchestrator | changed: [testbed-node-4] 2026-03-17 01:19:34.277780 | orchestrator | changed: [testbed-node-5] 2026-03-17 01:19:34.277786 | orchestrator | 2026-03-17 01:19:34.277793 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-17 01:19:34.277800 | orchestrator | Tuesday 17 March 2026 01:18:26 +0000 (0:00:22.653) 0:08:04.669 ********* 2026-03-17 01:19:34.277806 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.277813 | orchestrator | 2026-03-17 01:19:34.277824 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-17 01:19:34.277835 | orchestrator | Tuesday 17 March 2026 01:18:26 +0000 (0:00:00.125) 0:08:04.794 ********* 2026-03-17 01:19:34.277841 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.277847 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.277853 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.277859 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.277865 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.277872 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-17 01:19:34.277879 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:19:34.277885 | orchestrator | 2026-03-17 01:19:34.277892 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-17 01:19:34.277898 | orchestrator | Tuesday 17 March 2026 01:18:46 +0000 (0:00:20.062) 0:08:24.857 ********* 2026-03-17 01:19:34.277905 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.277911 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.277917 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.277924 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.277930 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.277937 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.277943 | orchestrator | 2026-03-17 01:19:34.277950 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-17 01:19:34.277957 | orchestrator | Tuesday 17 March 2026 01:18:55 +0000 (0:00:09.680) 0:08:34.538 ********* 2026-03-17 01:19:34.277962 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.277969 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.277976 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.277982 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.277988 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.277995 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-17 01:19:34.278001 | orchestrator | 2026-03-17 01:19:34.278007 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-17 01:19:34.278052 | orchestrator | Tuesday 17 March 2026 01:18:59 +0000 (0:00:03.175) 0:08:37.714 ********* 2026-03-17 01:19:34.278061 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:19:34.278067 | 
orchestrator | 2026-03-17 01:19:34.278074 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-17 01:19:34.278081 | orchestrator | Tuesday 17 March 2026 01:19:13 +0000 (0:00:14.680) 0:08:52.394 ********* 2026-03-17 01:19:34.278087 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:19:34.278094 | orchestrator | 2026-03-17 01:19:34.278101 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-17 01:19:34.278107 | orchestrator | Tuesday 17 March 2026 01:19:15 +0000 (0:00:01.471) 0:08:53.865 ********* 2026-03-17 01:19:34.278113 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.278120 | orchestrator | 2026-03-17 01:19:34.278127 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-17 01:19:34.278134 | orchestrator | Tuesday 17 March 2026 01:19:16 +0000 (0:00:01.465) 0:08:55.331 ********* 2026-03-17 01:19:34.278141 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-17 01:19:34.278148 | orchestrator | 2026-03-17 01:19:34.278154 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-17 01:19:34.278161 | orchestrator | 2026-03-17 01:19:34.278168 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-17 01:19:34.278174 | orchestrator | Tuesday 17 March 2026 01:19:28 +0000 (0:00:11.340) 0:09:06.672 ********* 2026-03-17 01:19:34.278180 | orchestrator | changed: [testbed-node-0] 2026-03-17 01:19:34.278186 | orchestrator | changed: [testbed-node-1] 2026-03-17 01:19:34.278193 | orchestrator | changed: [testbed-node-2] 2026-03-17 01:19:34.278199 | orchestrator | 2026-03-17 01:19:34.278206 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-17 01:19:34.278217 | orchestrator | 2026-03-17 
01:19:34.278225 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-17 01:19:34.278232 | orchestrator | Tuesday 17 March 2026 01:19:29 +0000 (0:00:00.893) 0:09:07.565 ********* 2026-03-17 01:19:34.278238 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.278245 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.278252 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.278258 | orchestrator | 2026-03-17 01:19:34.278265 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-17 01:19:34.278271 | orchestrator | 2026-03-17 01:19:34.278279 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-17 01:19:34.278285 | orchestrator | Tuesday 17 March 2026 01:19:29 +0000 (0:00:00.714) 0:09:08.280 ********* 2026-03-17 01:19:34.278292 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-17 01:19:34.278299 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-17 01:19:34.278306 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-17 01:19:34.278312 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-17 01:19:34.278319 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-17 01:19:34.278326 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-17 01:19:34.278333 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-17 01:19:34.278340 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-17 01:19:34.278351 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-17 01:19:34.278358 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-17 01:19:34.278365 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-17 
01:19:34.278425 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-17 01:19:34.278433 | orchestrator | skipping: [testbed-node-3] 2026-03-17 01:19:34.278441 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-17 01:19:34.278454 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-17 01:19:34.278460 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-17 01:19:34.278467 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-17 01:19:34.278473 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-17 01:19:34.278480 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-17 01:19:34.278486 | orchestrator | skipping: [testbed-node-4] 2026-03-17 01:19:34.278492 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-17 01:19:34.278498 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-17 01:19:34.278505 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-17 01:19:34.278510 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-17 01:19:34.278517 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-17 01:19:34.278523 | orchestrator | skipping: [testbed-node-5] 2026-03-17 01:19:34.278530 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-17 01:19:34.278536 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-17 01:19:34.278542 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-17 01:19:34.278549 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-17 01:19:34.278556 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-17 01:19:34.278562 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-17 
01:19:34.278569 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-17 01:19:34.278575 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.278581 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.278587 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-17 01:19:34.278598 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-17 01:19:34.278605 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-17 01:19:34.278612 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-17 01:19:34.278619 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-17 01:19:34.278625 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-17 01:19:34.278632 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.278639 | orchestrator | 2026-03-17 01:19:34.278645 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-17 01:19:34.278651 | orchestrator | 2026-03-17 01:19:34.278658 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-17 01:19:34.278665 | orchestrator | Tuesday 17 March 2026 01:19:31 +0000 (0:00:01.315) 0:09:09.595 ********* 2026-03-17 01:19:34.278671 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-17 01:19:34.278678 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-17 01:19:34.278685 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.278692 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-17 01:19:34.278699 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-17 01:19:34.278705 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.278711 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-17 01:19:34.278717 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-17 01:19:34.278725 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.278731 | orchestrator | 2026-03-17 01:19:34.278737 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-17 01:19:34.278744 | orchestrator | 2026-03-17 01:19:34.278751 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-17 01:19:34.278757 | orchestrator | Tuesday 17 March 2026 01:19:31 +0000 (0:00:00.502) 0:09:10.098 ********* 2026-03-17 01:19:34.278764 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.278770 | orchestrator | 2026-03-17 01:19:34.278777 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-17 01:19:34.278784 | orchestrator | 2026-03-17 01:19:34.278790 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-17 01:19:34.278797 | orchestrator | Tuesday 17 March 2026 01:19:32 +0000 (0:00:01.132) 0:09:11.231 ********* 2026-03-17 01:19:34.278804 | orchestrator | skipping: [testbed-node-0] 2026-03-17 01:19:34.278810 | orchestrator | skipping: [testbed-node-1] 2026-03-17 01:19:34.278817 | orchestrator | skipping: [testbed-node-2] 2026-03-17 01:19:34.278823 | orchestrator | 2026-03-17 01:19:34.278830 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:19:34.278837 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:19:34.278846 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=50  rescued=0 ignored=0 2026-03-17 01:19:34.278853 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-03-17 01:19:34.278864 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 
failed=0 skipped=57  rescued=0 ignored=0 2026-03-17 01:19:34.278871 | orchestrator | testbed-node-3 : ok=47  changed=30  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2026-03-17 01:19:34.278882 | orchestrator | testbed-node-4 : ok=45  changed=29  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-17 01:19:34.278889 | orchestrator | testbed-node-5 : ok=40  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-17 01:19:34.278903 | orchestrator | 2026-03-17 01:19:34.278910 | orchestrator | 2026-03-17 01:19:34.278916 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:19:34.278923 | orchestrator | Tuesday 17 March 2026 01:19:33 +0000 (0:00:00.449) 0:09:11.680 ********* 2026-03-17 01:19:34.278930 | orchestrator | =============================================================================== 2026-03-17 01:19:34.278936 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.80s 2026-03-17 01:19:34.278943 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.91s 2026-03-17 01:19:34.278949 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.57s 2026-03-17 01:19:34.278956 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.65s 2026-03-17 01:19:34.278962 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.06s 2026-03-17 01:19:34.278968 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 19.65s 2026-03-17 01:19:34.278975 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.84s 2026-03-17 01:19:34.278981 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.10s 2026-03-17 01:19:34.278988 | orchestrator | nova-cell : Restart nova-ssh container 
--------------------------------- 16.03s 2026-03-17 01:19:34.278995 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.95s 2026-03-17 01:19:34.279001 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 14.87s 2026-03-17 01:19:34.279008 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.68s 2026-03-17 01:19:34.279014 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.89s 2026-03-17 01:19:34.279021 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.42s 2026-03-17 01:19:34.279028 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.81s 2026-03-17 01:19:34.279034 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.46s 2026-03-17 01:19:34.279041 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.34s 2026-03-17 01:19:34.279047 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.72s 2026-03-17 01:19:34.279054 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.68s 2026-03-17 01:19:34.279060 | orchestrator | nova-cell : Get container facts ----------------------------------------- 9.16s 2026-03-17 01:19:37.305705 | orchestrator | 2026-03-17 01:19:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-17 01:20:35.058644 | orchestrator | 2026-03-17 01:20:35.238373 | orchestrator | 2026-03-17 01:20:35.244824 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Mar 17 01:20:35 UTC 2026 2026-03-17 01:20:35.246214 | orchestrator | 2026-03-17 01:20:35.653197 | 
orchestrator | ok: Runtime: 0:33:38.280818 2026-03-17 01:20:35.911434 | 2026-03-17 01:20:35.911618 | TASK [Bootstrap services] 2026-03-17 01:20:36.688788 | orchestrator | 2026-03-17 01:20:36.688957 | orchestrator | # BOOTSTRAP 2026-03-17 01:20:36.688975 | orchestrator | 2026-03-17 01:20:36.688982 | orchestrator | + set -e 2026-03-17 01:20:36.688990 | orchestrator | + echo 2026-03-17 01:20:36.688997 | orchestrator | + echo '# BOOTSTRAP' 2026-03-17 01:20:36.689008 | orchestrator | + echo 2026-03-17 01:20:36.689035 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-17 01:20:36.697371 | orchestrator | + set -e 2026-03-17 01:20:36.697451 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-17 01:20:41.288043 | orchestrator | 2026-03-17 01:20:41 | INFO  | It takes a moment until task 582030cb-cf21-4c90-9736-1bd7674ecb9a (flavor-manager) has been started and output is visible here. 2026-03-17 01:20:50.399865 | orchestrator | 2026-03-17 01:20:45 | INFO  | Flavor SCS-1L-1 created 2026-03-17 01:20:50.399955 | orchestrator | 2026-03-17 01:20:45 | INFO  | Flavor SCS-1L-1-5 created 2026-03-17 01:20:50.399963 | orchestrator | 2026-03-17 01:20:46 | INFO  | Flavor SCS-1V-2 created 2026-03-17 01:20:50.399968 | orchestrator | 2026-03-17 01:20:46 | INFO  | Flavor SCS-1V-2-5 created 2026-03-17 01:20:50.399972 | orchestrator | 2026-03-17 01:20:46 | INFO  | Flavor SCS-1V-4 created 2026-03-17 01:20:50.399976 | orchestrator | 2026-03-17 01:20:46 | INFO  | Flavor SCS-1V-4-10 created 2026-03-17 01:20:50.399980 | orchestrator | 2026-03-17 01:20:46 | INFO  | Flavor SCS-1V-8 created 2026-03-17 01:20:50.399985 | orchestrator | 2026-03-17 01:20:47 | INFO  | Flavor SCS-1V-8-20 created 2026-03-17 01:20:50.399995 | orchestrator | 2026-03-17 01:20:47 | INFO  | Flavor SCS-2V-4 created 2026-03-17 01:20:50.399999 | orchestrator | 2026-03-17 01:20:47 | INFO  | Flavor SCS-2V-4-10 created 2026-03-17 01:20:50.400003 | orchestrator | 
2026-03-17 01:20:47 | INFO  | Flavor SCS-2V-8 created 2026-03-17 01:20:50.400007 | orchestrator | 2026-03-17 01:20:47 | INFO  | Flavor SCS-2V-8-20 created 2026-03-17 01:20:50.400011 | orchestrator | 2026-03-17 01:20:47 | INFO  | Flavor SCS-2V-16 created 2026-03-17 01:20:50.400015 | orchestrator | 2026-03-17 01:20:47 | INFO  | Flavor SCS-2V-16-50 created 2026-03-17 01:20:50.400019 | orchestrator | 2026-03-17 01:20:48 | INFO  | Flavor SCS-4V-8 created 2026-03-17 01:20:50.400024 | orchestrator | 2026-03-17 01:20:48 | INFO  | Flavor SCS-4V-8-20 created 2026-03-17 01:20:50.400027 | orchestrator | 2026-03-17 01:20:48 | INFO  | Flavor SCS-4V-16 created 2026-03-17 01:20:50.400031 | orchestrator | 2026-03-17 01:20:48 | INFO  | Flavor SCS-4V-16-50 created 2026-03-17 01:20:50.400035 | orchestrator | 2026-03-17 01:20:48 | INFO  | Flavor SCS-4V-32 created 2026-03-17 01:20:50.400040 | orchestrator | 2026-03-17 01:20:48 | INFO  | Flavor SCS-4V-32-100 created 2026-03-17 01:20:50.400044 | orchestrator | 2026-03-17 01:20:48 | INFO  | Flavor SCS-8V-16 created 2026-03-17 01:20:50.400048 | orchestrator | 2026-03-17 01:20:49 | INFO  | Flavor SCS-8V-16-50 created 2026-03-17 01:20:50.400052 | orchestrator | 2026-03-17 01:20:49 | INFO  | Flavor SCS-8V-32 created 2026-03-17 01:20:50.400056 | orchestrator | 2026-03-17 01:20:49 | INFO  | Flavor SCS-8V-32-100 created 2026-03-17 01:20:50.400060 | orchestrator | 2026-03-17 01:20:49 | INFO  | Flavor SCS-16V-32 created 2026-03-17 01:20:50.400064 | orchestrator | 2026-03-17 01:20:49 | INFO  | Flavor SCS-16V-32-100 created 2026-03-17 01:20:50.400068 | orchestrator | 2026-03-17 01:20:49 | INFO  | Flavor SCS-2V-4-20s created 2026-03-17 01:20:50.400072 | orchestrator | 2026-03-17 01:20:49 | INFO  | Flavor SCS-4V-8-50s created 2026-03-17 01:20:50.400075 | orchestrator | 2026-03-17 01:20:50 | INFO  | Flavor SCS-4V-16-100s created 2026-03-17 01:20:50.400080 | orchestrator | 2026-03-17 01:20:50 | INFO  | Flavor SCS-8V-32-100s created 2026-03-17 
01:20:51.942352 | orchestrator | 2026-03-17 01:20:51 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-17 01:21:02.104420 | orchestrator | 2026-03-17 01:21:02 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-17 01:21:02.183023 | orchestrator | 2026-03-17 01:21:02 | INFO  | Task 154ded34-86d3-4d9f-836f-5f4c71ca78eb (bootstrap-basic) was prepared for execution. 2026-03-17 01:21:02.183107 | orchestrator | 2026-03-17 01:21:02 | INFO  | It takes a moment until task 154ded34-86d3-4d9f-836f-5f4c71ca78eb (bootstrap-basic) has been started and output is visible here. 2026-03-17 01:21:48.876079 | orchestrator | 2026-03-17 01:21:48.876175 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-17 01:21:48.876187 | orchestrator | 2026-03-17 01:21:48.876194 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-17 01:21:48.876201 | orchestrator | Tuesday 17 March 2026 01:21:05 +0000 (0:00:00.096) 0:00:00.096 ********* 2026-03-17 01:21:48.876209 | orchestrator | ok: [localhost] 2026-03-17 01:21:48.876216 | orchestrator | 2026-03-17 01:21:48.876222 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-17 01:21:48.876228 | orchestrator | Tuesday 17 March 2026 01:21:07 +0000 (0:00:01.938) 0:00:02.034 ********* 2026-03-17 01:21:48.876236 | orchestrator | ok: [localhost] 2026-03-17 01:21:48.876242 | orchestrator | 2026-03-17 01:21:48.876248 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-17 01:21:48.876254 | orchestrator | Tuesday 17 March 2026 01:21:15 +0000 (0:00:08.558) 0:00:10.593 ********* 2026-03-17 01:21:48.876262 | orchestrator | changed: [localhost] 2026-03-17 01:21:48.876270 | orchestrator | 2026-03-17 01:21:48.876276 | orchestrator | TASK [Create public network] *************************************************** 
2026-03-17 01:21:48.876282 | orchestrator | Tuesday 17 March 2026 01:21:23 +0000 (0:00:08.233) 0:00:18.826 ********* 2026-03-17 01:21:48.876289 | orchestrator | changed: [localhost] 2026-03-17 01:21:48.876295 | orchestrator | 2026-03-17 01:21:48.876306 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-17 01:21:48.876313 | orchestrator | Tuesday 17 March 2026 01:21:29 +0000 (0:00:05.026) 0:00:23.853 ********* 2026-03-17 01:21:48.876319 | orchestrator | changed: [localhost] 2026-03-17 01:21:48.876326 | orchestrator | 2026-03-17 01:21:48.876333 | orchestrator | TASK [Create public subnet] **************************************************** 2026-03-17 01:21:48.876340 | orchestrator | Tuesday 17 March 2026 01:21:35 +0000 (0:00:06.731) 0:00:30.584 ********* 2026-03-17 01:21:48.876346 | orchestrator | changed: [localhost] 2026-03-17 01:21:48.876352 | orchestrator | 2026-03-17 01:21:48.876359 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-17 01:21:48.876365 | orchestrator | Tuesday 17 March 2026 01:21:40 +0000 (0:00:04.783) 0:00:35.367 ********* 2026-03-17 01:21:48.876371 | orchestrator | changed: [localhost] 2026-03-17 01:21:48.876377 | orchestrator | 2026-03-17 01:21:48.876383 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-17 01:21:48.876398 | orchestrator | Tuesday 17 March 2026 01:21:44 +0000 (0:00:04.120) 0:00:39.488 ********* 2026-03-17 01:21:48.876405 | orchestrator | ok: [localhost] 2026-03-17 01:21:48.876412 | orchestrator | 2026-03-17 01:21:48.876419 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-17 01:21:48.876425 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-17 01:21:48.876432 | orchestrator | 2026-03-17 01:21:48.876439 | orchestrator | 2026-03-17 01:21:48.876445 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-17 01:21:48.876451 | orchestrator | Tuesday 17 March 2026 01:21:48 +0000 (0:00:04.017) 0:00:43.505 ********* 2026-03-17 01:21:48.876457 | orchestrator | =============================================================================== 2026-03-17 01:21:48.876463 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.56s 2026-03-17 01:21:48.876540 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.23s 2026-03-17 01:21:48.876547 | orchestrator | Set public network to default ------------------------------------------- 6.73s 2026-03-17 01:21:48.876553 | orchestrator | Create public network --------------------------------------------------- 5.03s 2026-03-17 01:21:48.876560 | orchestrator | Create public subnet ---------------------------------------------------- 4.78s 2026-03-17 01:21:48.876565 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.12s 2026-03-17 01:21:48.876571 | orchestrator | Create manager role ----------------------------------------------------- 4.02s 2026-03-17 01:21:48.876577 | orchestrator | Gathering Facts --------------------------------------------------------- 1.94s 2026-03-17 01:21:50.847398 | orchestrator | 2026-03-17 01:21:50 | INFO  | It takes a moment until task e71c791c-3d3a-46cf-9475-0a53b24f42e5 (image-manager) has been started and output is visible here. 2026-03-17 01:21:53.743393 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url. 2026-03-17 01:21:53.743473 | orchestrator | Failed to contact the endpoint at https://api.testbed.osism.xyz:9292 for discovery. Fallback to using that endpoint as the base url. 
2026-03-17 01:21:53.743480 | orchestrator | ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ 2026-03-17 01:21:53.743487 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:131 │ 2026-03-17 01:21:53.743492 | orchestrator | │ in create_cli_args │ 2026-03-17 01:21:53.743496 | orchestrator | │ │ 2026-03-17 01:21:53.743501 | orchestrator | │ 128 │ │ logger.add(sys.stderr, format=log_fmt, level=level, colorize= │ 2026-03-17 01:21:53.743505 | orchestrator | │ 129 │ │ │ 2026-03-17 01:21:53.743509 | orchestrator | │ 130 │ │ if __name__ == "__main__" or __name__ == "openstack_image_man │ 2026-03-17 01:21:53.743513 | orchestrator | │ ❱ 131 │ │ │ self.main() │ 2026-03-17 01:21:53.743517 | orchestrator | │ 132 │ │ 2026-03-17 01:21:53.743521 | orchestrator | │ 133 │ def read_image_files(self, return_all_images=False) -> list: │ 2026-03-17 01:21:53.743525 | orchestrator | │ 134 │ │ """Read all YAML files in self.CONF.images""" │ 2026-03-17 01:21:53.743529 | orchestrator | │ │ 2026-03-17 01:21:53.743533 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:258 │ 2026-03-17 01:21:53.743537 | orchestrator | │ in main │ 2026-03-17 01:21:53.743540 | orchestrator | │ │ 2026-03-17 01:21:53.743544 | orchestrator | │ 255 │ │ else: │ 2026-03-17 01:21:53.743548 | orchestrator | │ 256 │ │ │ self.create_connection() │ 2026-03-17 01:21:53.743557 | orchestrator | │ 257 │ │ │ images = self.read_image_files() │ 2026-03-17 01:21:53.743561 | orchestrator | │ ❱ 258 │ │ │ managed_images = self.process_images(images) │ 2026-03-17 01:21:53.743565 | orchestrator | │ 259 │ │ │ │ 2026-03-17 01:21:53.743569 | orchestrator | │ 260 │ │ │ # ignore all non-specified images when using --filter │ 2026-03-17 01:21:53.743573 | orchestrator | │ 261 │ │ │ if self.CONF.filter: │ 2026-03-17 01:21:53.743576 | orchestrator | │ │ 2026-03-17 01:21:53.743580 | orchestrator | │ 
/usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:375 │ 2026-03-17 01:21:53.743601 | orchestrator | │ in process_images │ 2026-03-17 01:21:53.743605 | orchestrator | │ │ 2026-03-17 01:21:53.743609 | orchestrator | │ 372 │ │ │ if "image_name" not in image["meta"]: │ 2026-03-17 01:21:53.743613 | orchestrator | │ 373 │ │ │ │ image["meta"]["image_name"] = image["name"] │ 2026-03-17 01:21:53.743617 | orchestrator | │ 374 │ │ │ │ 2026-03-17 01:21:53.743625 | orchestrator | │ ❱ 375 │ │ │ existing_images, imported_image, previous_image = self.pr │ 2026-03-17 01:21:53.743629 | orchestrator | │ 376 │ │ │ │ image, versions, sorted_versions, image["meta"].copy( │ 2026-03-17 01:21:53.743633 | orchestrator | │ 377 │ │ │ ) │ 2026-03-17 01:21:53.743637 | orchestrator | │ 378 │ │ │ managed_images = managed_images.union(existing_images) │ 2026-03-17 01:21:53.743641 | orchestrator | │ │ 2026-03-17 01:21:53.743644 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:548 │ 2026-03-17 01:21:53.743648 | orchestrator | │ in process_image │ 2026-03-17 01:21:53.743652 | orchestrator | │ │ 2026-03-17 01:21:53.743656 | orchestrator | │ 545 │ │ Returns: │ 2026-03-17 01:21:53.743660 | orchestrator | │ 546 │ │ │ Tuple with (existing_images, imported_image, previous_ima │ 2026-03-17 01:21:53.743664 | orchestrator | │ 547 │ │ """ │ 2026-03-17 01:21:53.743668 | orchestrator | │ ❱ 548 │ │ cloud_images = self.get_images() │ 2026-03-17 01:21:53.743672 | orchestrator | │ 549 │ │ │ 2026-03-17 01:21:53.743735 | orchestrator | │ 550 │ │ existing_images: Set[str] = set() │ 2026-03-17 01:21:53.743744 | orchestrator | │ 551 │ │ imported_image = None │ 2026-03-17 01:21:53.743750 | orchestrator | │ │ 2026-03-17 01:21:53.743756 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack_image_manager/main.py:469 │ 2026-03-17 01:21:53.743762 | orchestrator | │ in get_images │ 2026-03-17 01:21:53.743768 | orchestrator | │ │ 2026-03-17 
01:21:53.743775 | orchestrator | │ 466 │ │ """ │ 2026-03-17 01:21:53.743780 | orchestrator | │ 467 │ │ result = {} │ 2026-03-17 01:21:53.743787 | orchestrator | │ 468 │ │ │ 2026-03-17 01:21:53.743793 | orchestrator | │ ❱ 469 │ │ for image in self.conn.image.images(): │ 2026-03-17 01:21:53.743799 | orchestrator | │ 470 │ │ │ if self.CONF.tag in image.tags and ( │ 2026-03-17 01:21:53.743806 | orchestrator | │ 471 │ │ │ │ image.visibility == "public" │ 2026-03-17 01:21:53.743810 | orchestrator | │ 472 │ │ │ │ or image.owner == self.conn.current_project_id │ 2026-03-17 01:21:53.743814 | orchestrator | │ │ 2026-03-17 01:21:53.743818 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack/service_description.py:91 │ 2026-03-17 01:21:53.743821 | orchestrator | │ in __get__ │ 2026-03-17 01:21:53.743831 | orchestrator | │ │ 2026-03-17 01:21:53.743839 | orchestrator | │ 88 │ │ if instance is None: │ 2026-03-17 01:21:53.743843 | orchestrator | │ 89 │ │ │ return self │ 2026-03-17 01:21:53.743847 | orchestrator | │ 90 │ │ if self.service_type not in instance._proxies: │ 2026-03-17 01:21:53.743850 | orchestrator | │ ❱ 91 │ │ │ proxy = self._make_proxy(instance) │ 2026-03-17 01:21:53.743854 | orchestrator | │ 92 │ │ │ if not isinstance(proxy, _ServiceDisabledProxyShim): │ 2026-03-17 01:21:53.743858 | orchestrator | │ 93 │ │ │ │ # The keystone proxy has a method called get_endpoint │ 2026-03-17 01:21:53.743862 | orchestrator | │ 94 │ │ │ │ # that is about managing keystone endpoints. 
This is │ 2026-03-17 01:21:53.743866 | orchestrator | │ │ 2026-03-17 01:21:53.743870 | orchestrator | │ /usr/local/lib/python3.13/site-packages/openstack/service_description.py:293 │ 2026-03-17 01:21:53.743875 | orchestrator | │ in _make_proxy │ 2026-03-17 01:21:53.743880 | orchestrator | │ │ 2026-03-17 01:21:53.743884 | orchestrator | │ 290 │ │ if found_version is None: │ 2026-03-17 01:21:53.743888 | orchestrator | │ 291 │ │ │ region_name = instance.config.get_region_name(self.service │ 2026-03-17 01:21:53.743892 | orchestrator | │ 292 │ │ │ if version_kwargs: │ 2026-03-17 01:21:53.743896 | orchestrator | │ ❱ 293 │ │ │ │ raise exceptions.NotSupported( │ 2026-03-17 01:21:53.743900 | orchestrator | │ 294 │ │ │ │ │ f"The {self.service_type} service for " │ 2026-03-17 01:21:53.743904 | orchestrator | │ 295 │ │ │ │ │ f"{instance.name}:{region_name} exists but does no │ 2026-03-17 01:21:53.743908 | orchestrator | │ 296 │ │ │ │ │ f"any supported versions." │ 2026-03-17 01:21:53.743921 | orchestrator | ╰──────────────────────────────────────────────────────────────────────────────╯ 2026-03-17 01:21:53.743930 | orchestrator | NotSupported: The image service for admin: exists but does not have any 2026-03-17 01:21:53.743941 | orchestrator | supported versions. 
2026-03-17 01:21:54.048361 | orchestrator | ERROR 2026-03-17 01:21:54.048594 | orchestrator | { 2026-03-17 01:21:54.048632 | orchestrator | "delta": "0:01:17.661580", 2026-03-17 01:21:54.048656 | orchestrator | "end": "2026-03-17 01:21:53.935734", 2026-03-17 01:21:54.048678 | orchestrator | "msg": "non-zero return code", 2026-03-17 01:21:54.048698 | orchestrator | "rc": 1, 2026-03-17 01:21:54.048717 | orchestrator | "start": "2026-03-17 01:20:36.274154" 2026-03-17 01:21:54.048736 | orchestrator | } failure 2026-03-17 01:21:54.061280 | 2026-03-17 01:21:54.061372 | PLAY RECAP 2026-03-17 01:21:54.061441 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2026-03-17 01:21:54.061616 | 2026-03-17 01:21:54.669468 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-03-17 01:21:54.670519 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-17 01:21:55.979045 | 2026-03-17 01:21:55.979211 | PLAY [Post output play] 2026-03-17 01:21:56.012153 | 2026-03-17 01:21:56.012268 | LOOP [stage-output : Register sources] 2026-03-17 01:21:56.091769 | 2026-03-17 01:21:56.091941 | TASK [stage-output : Check sudo] 2026-03-17 01:21:56.996207 | orchestrator | sudo: a password is required 2026-03-17 01:21:57.136589 | orchestrator | ok: Runtime: 0:00:00.011896 2026-03-17 01:21:57.143708 | 2026-03-17 01:21:57.143808 | LOOP [stage-output : Set source and destination for files and folders] 2026-03-17 01:21:57.181062 | 2026-03-17 01:21:57.181245 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-03-17 01:21:57.261653 | orchestrator | ok 2026-03-17 01:21:57.269649 | 2026-03-17 01:21:57.269754 | LOOP [stage-output : Ensure target folders exist] 2026-03-17 01:21:57.932793 | orchestrator | ok: "docs" 2026-03-17 01:21:57.933000 | 2026-03-17 01:21:58.231356 | orchestrator | ok: "artifacts" 2026-03-17 01:21:58.518853 | orchestrator | ok: "logs" 2026-03-17 
01:21:58.529536 | 2026-03-17 01:21:58.529622 | LOOP [stage-output : Copy files and folders to staging folder] 2026-03-17 01:21:58.567671 | 2026-03-17 01:21:58.567830 | TASK [stage-output : Make all log files readable] 2026-03-17 01:21:58.908006 | orchestrator | ok 2026-03-17 01:21:58.912986 | 2026-03-17 01:21:58.913067 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-03-17 01:21:58.961141 | orchestrator | skipping: Conditional result was False 2026-03-17 01:21:58.967719 | 2026-03-17 01:21:58.967807 | TASK [stage-output : Discover log files for compression] 2026-03-17 01:21:58.983324 | orchestrator | skipping: Conditional result was False 2026-03-17 01:21:58.989958 | 2026-03-17 01:21:58.990040 | LOOP [stage-output : Archive everything from logs] 2026-03-17 01:21:59.023932 | 2026-03-17 01:21:59.024060 | PLAY [Post cleanup play] 2026-03-17 01:21:59.038544 | 2026-03-17 01:21:59.038641 | TASK [Set cloud fact (Zuul deployment)] 2026-03-17 01:21:59.105280 | orchestrator | ok 2026-03-17 01:21:59.112657 | 2026-03-17 01:21:59.112748 | TASK [Set cloud fact (local deployment)] 2026-03-17 01:21:59.136923 | orchestrator | skipping: Conditional result was False 2026-03-17 01:21:59.143121 | 2026-03-17 01:21:59.143199 | TASK [Clean the cloud environment] 2026-03-17 01:22:00.525298 | orchestrator | 2026-03-17 01:22:00 - clean up servers 2026-03-17 01:22:01.256678 | orchestrator | 2026-03-17 01:22:01 - testbed-manager 2026-03-17 01:22:01.344250 | orchestrator | 2026-03-17 01:22:01 - testbed-node-3 2026-03-17 01:22:01.426629 | orchestrator | 2026-03-17 01:22:01 - testbed-node-4 2026-03-17 01:22:01.516011 | orchestrator | 2026-03-17 01:22:01 - testbed-node-0 2026-03-17 01:22:01.602446 | orchestrator | 2026-03-17 01:22:01 - testbed-node-2 2026-03-17 01:22:01.688664 | orchestrator | 2026-03-17 01:22:01 - testbed-node-5 2026-03-17 01:22:01.778859 | orchestrator | 2026-03-17 01:22:01 - testbed-node-1 2026-03-17 01:22:01.869565 | orchestrator | 2026-03-17 01:22:01 
- clean up keypairs 2026-03-17 01:22:01.886754 | orchestrator | 2026-03-17 01:22:01 - testbed 2026-03-17 01:22:01.910089 | orchestrator | 2026-03-17 01:22:01 - wait for servers to be gone 2026-03-17 01:22:14.905799 | orchestrator | 2026-03-17 01:22:14 - clean up ports 2026-03-17 01:22:15.107372 | orchestrator | 2026-03-17 01:22:15 - 109ca916-7968-43be-ba77-b1a54baa947a 2026-03-17 01:22:15.598447 | orchestrator | 2026-03-17 01:22:15 - 3c15376c-4668-40ce-996f-c18cfa243e10 2026-03-17 01:22:15.876269 | orchestrator | 2026-03-17 01:22:15 - 63da57dc-05f5-46f0-b52d-d5851123c70f 2026-03-17 01:22:16.130868 | orchestrator | 2026-03-17 01:22:16 - 66c2d25a-954e-4b07-96f5-c79d86fa48b5 2026-03-17 01:22:16.418231 | orchestrator | 2026-03-17 01:22:16 - 8f2052bc-aaa1-430d-bb9e-446848657f23 2026-03-17 01:22:16.640120 | orchestrator | 2026-03-17 01:22:16 - ac617e09-6561-4b97-b354-272d563398d6 2026-03-17 01:22:16.845013 | orchestrator | 2026-03-17 01:22:16 - e7e7140f-27fa-432d-9c28-bf8d272cb198 2026-03-17 01:22:17.054067 | orchestrator | 2026-03-17 01:22:17 - clean up volumes 2026-03-17 01:22:17.169187 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-5-node-base 2026-03-17 01:22:17.209274 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-3-node-base 2026-03-17 01:22:17.247657 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-manager-base 2026-03-17 01:22:17.295082 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-0-node-base 2026-03-17 01:22:17.335364 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-4-node-base 2026-03-17 01:22:17.378120 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-1-node-base 2026-03-17 01:22:17.419395 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-2-node-base 2026-03-17 01:22:17.460207 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-7-node-4 2026-03-17 01:22:17.501826 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-4-node-4 2026-03-17 01:22:17.544488 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-5-node-5 
2026-03-17 01:22:17.585707 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-2-node-5 2026-03-17 01:22:17.631984 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-1-node-4 2026-03-17 01:22:17.674717 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-8-node-5 2026-03-17 01:22:17.719026 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-3-node-3 2026-03-17 01:22:17.766458 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-6-node-3 2026-03-17 01:22:17.806813 | orchestrator | 2026-03-17 01:22:17 - testbed-volume-0-node-3 2026-03-17 01:22:17.850123 | orchestrator | 2026-03-17 01:22:17 - disconnect routers 2026-03-17 01:22:17.925308 | orchestrator | 2026-03-17 01:22:17 - testbed 2026-03-17 01:22:18.917570 | orchestrator | 2026-03-17 01:22:18 - clean up subnets 2026-03-17 01:22:18.965065 | orchestrator | 2026-03-17 01:22:18 - subnet-testbed-management 2026-03-17 01:22:19.147036 | orchestrator | 2026-03-17 01:22:19 - clean up networks 2026-03-17 01:22:19.313540 | orchestrator | 2026-03-17 01:22:19 - net-testbed-management 2026-03-17 01:22:19.605108 | orchestrator | 2026-03-17 01:22:19 - clean up security groups 2026-03-17 01:22:19.651361 | orchestrator | 2026-03-17 01:22:19 - testbed-management 2026-03-17 01:22:19.799031 | orchestrator | 2026-03-17 01:22:19 - testbed-node 2026-03-17 01:22:19.893398 | orchestrator | 2026-03-17 01:22:19 - clean up floating ips 2026-03-17 01:22:19.930241 | orchestrator | 2026-03-17 01:22:19 - 81.163.192.14 2026-03-17 01:22:20.270249 | orchestrator | 2026-03-17 01:22:20 - clean up routers 2026-03-17 01:22:20.337660 | orchestrator | 2026-03-17 01:22:20 - testbed 2026-03-17 01:22:21.719432 | orchestrator | ok: Runtime: 0:00:22.199112 2026-03-17 01:22:21.723331 | 2026-03-17 01:22:21.723432 | PLAY RECAP 2026-03-17 01:22:21.723513 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-03-17 01:22:21.723539 | 2026-03-17 01:22:21.918771 | POST-RUN END RESULT_NORMAL: [untrusted : 
github.com/osism/testbed/playbooks/post.yml@main] 2026-03-17 01:22:21.920071 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-17 01:22:22.727199 | 2026-03-17 01:22:22.727432 | PLAY [Cleanup play] 2026-03-17 01:22:22.744026 | 2026-03-17 01:22:22.744169 | TASK [Set cloud fact (Zuul deployment)] 2026-03-17 01:22:22.798987 | orchestrator | ok 2026-03-17 01:22:22.807837 | 2026-03-17 01:22:22.807981 | TASK [Set cloud fact (local deployment)] 2026-03-17 01:22:22.843303 | orchestrator | skipping: Conditional result was False 2026-03-17 01:22:22.855626 | 2026-03-17 01:22:22.855774 | TASK [Clean the cloud environment] 2026-03-17 01:22:24.131602 | orchestrator | 2026-03-17 01:22:24 - clean up servers 2026-03-17 01:22:24.608859 | orchestrator | 2026-03-17 01:22:24 - clean up keypairs 2026-03-17 01:22:24.626079 | orchestrator | 2026-03-17 01:22:24 - wait for servers to be gone 2026-03-17 01:22:24.669258 | orchestrator | 2026-03-17 01:22:24 - clean up ports 2026-03-17 01:22:24.746869 | orchestrator | 2026-03-17 01:22:24 - clean up volumes 2026-03-17 01:22:24.810131 | orchestrator | 2026-03-17 01:22:24 - disconnect routers 2026-03-17 01:22:24.838666 | orchestrator | 2026-03-17 01:22:24 - clean up subnets 2026-03-17 01:22:24.861503 | orchestrator | 2026-03-17 01:22:24 - clean up networks 2026-03-17 01:22:24.991272 | orchestrator | 2026-03-17 01:22:24 - clean up security groups 2026-03-17 01:22:25.025770 | orchestrator | 2026-03-17 01:22:25 - clean up floating ips 2026-03-17 01:22:25.051981 | orchestrator | 2026-03-17 01:22:25 - clean up routers 2026-03-17 01:22:25.398155 | orchestrator | ok: Runtime: 0:00:01.399903 2026-03-17 01:22:25.400622 | 2026-03-17 01:22:25.400724 | PLAY RECAP 2026-03-17 01:22:25.400795 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-17 01:22:25.400830 | 2026-03-17 01:22:25.547584 | POST-RUN END RESULT_NORMAL: [untrusted : 
github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-17 01:22:25.548808 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-17 01:22:26.366701 | 2026-03-17 01:22:26.367047 | PLAY [Base post-fetch] 2026-03-17 01:22:26.390179 | 2026-03-17 01:22:26.390337 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-17 01:22:26.446669 | orchestrator | skipping: Conditional result was False 2026-03-17 01:22:26.459679 | 2026-03-17 01:22:26.460023 | TASK [fetch-output : Set log path for single node] 2026-03-17 01:22:26.510801 | orchestrator | ok 2026-03-17 01:22:26.523324 | 2026-03-17 01:22:26.523518 | LOOP [fetch-output : Ensure local output dirs] 2026-03-17 01:22:27.070320 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/3e0e57a4161f4df9aa9619c57544ea04/work/logs" 2026-03-17 01:22:27.388772 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3e0e57a4161f4df9aa9619c57544ea04/work/artifacts" 2026-03-17 01:22:27.757605 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/3e0e57a4161f4df9aa9619c57544ea04/work/docs" 2026-03-17 01:22:27.772935 | 2026-03-17 01:22:27.773082 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-17 01:22:28.734516 | orchestrator | changed: .d..t...... ./ 2026-03-17 01:22:28.734821 | orchestrator | changed: All items complete 2026-03-17 01:22:28.734903 | 2026-03-17 01:22:29.504093 | orchestrator | changed: .d..t...... ./ 2026-03-17 01:22:30.245835 | orchestrator | changed: .d..t...... 
./ 2026-03-17 01:22:30.273652 | 2026-03-17 01:22:30.273795 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-17 01:22:30.318561 | orchestrator | skipping: Conditional result was False 2026-03-17 01:22:30.320988 | orchestrator | skipping: Conditional result was False 2026-03-17 01:22:30.350114 | 2026-03-17 01:22:30.350298 | PLAY RECAP 2026-03-17 01:22:30.350409 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-17 01:22:30.350490 | 2026-03-17 01:22:30.542807 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-17 01:22:30.544668 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-17 01:22:31.344518 | 2026-03-17 01:22:31.344854 | PLAY [Base post] 2026-03-17 01:22:31.366884 | 2026-03-17 01:22:31.367088 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-17 01:22:32.660165 | orchestrator | changed 2026-03-17 01:22:32.669371 | 2026-03-17 01:22:32.669511 | PLAY RECAP 2026-03-17 01:22:32.669576 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-17 01:22:32.669639 | 2026-03-17 01:22:32.806630 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-17 01:22:32.809537 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-17 01:22:33.724563 | 2026-03-17 01:22:33.724793 | PLAY [Base post-logs] 2026-03-17 01:22:33.737663 | 2026-03-17 01:22:33.737825 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-17 01:22:34.236679 | localhost | changed 2026-03-17 01:22:34.247421 | 2026-03-17 01:22:34.247611 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-17 01:22:34.286530 | localhost | ok 2026-03-17 01:22:34.292370 | 2026-03-17 01:22:34.292592 | TASK [Set zuul-log-path fact] 2026-03-17 
01:22:34.320726 | localhost | ok 2026-03-17 01:22:34.334870 | 2026-03-17 01:22:34.334996 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-17 01:22:34.369092 | localhost | ok 2026-03-17 01:22:34.372443 | 2026-03-17 01:22:34.372601 | TASK [upload-logs : Create log directories] 2026-03-17 01:22:34.995444 | localhost | changed 2026-03-17 01:22:35.005218 | 2026-03-17 01:22:35.005426 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-17 01:22:35.584581 | localhost -> localhost | ok: Runtime: 0:00:00.011411 2026-03-17 01:22:35.590046 | 2026-03-17 01:22:35.590171 | TASK [upload-logs : Upload logs to log server] 2026-03-17 01:22:36.164481 | localhost | Output suppressed because no_log was given 2026-03-17 01:22:36.166358 | 2026-03-17 01:22:36.166488 | LOOP [upload-logs : Compress console log and json output] 2026-03-17 01:22:36.272215 | localhost | skipping: Conditional result was False 2026-03-17 01:22:36.286133 | localhost | skipping: Conditional result was False 2026-03-17 01:22:36.296148 | 2026-03-17 01:22:36.296297 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-17 01:22:36.343184 | localhost | skipping: Conditional result was False 2026-03-17 01:22:36.343555 | 2026-03-17 01:22:36.348211 | localhost | skipping: Conditional result was False 2026-03-17 01:22:36.354722 | 2026-03-17 01:22:36.354932 | LOOP [upload-logs : Upload console log and json output]